segmented hdb / 32-bit not enough storage error

I’m receiving the message ‘Not enough storage is available to process this command.’ when attempting to load a large database with slave processes.

I’m attempting to use sub-processes to query a large database on the 32-bit version of kdb+. I’ve modified buildhdb.q to use 5 segments. I’ve also tried it with 15 segments, which puts each segment slightly over 500 MB.
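For reference, the segmented root is just a directory whose par.txt lists the segment directories, one per line. With 5 segments mine is along these lines (the d0–d4 names are an illustration inferred from the seg/d3 path in the error below):

seg/d0
seg/d1
seg/d2
seg/d3
seg/d4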

I’ve started up each of the slaves on a different port (any other startup flags, and whatever the slaves themselves load, omitted here), roughly:
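q -p 5000
q -p 5001
q -p 5002
q -p 5003
q -p 5004

The only thing that matters here is that the ports match the 5000+til 5 that .z.pd opens. Then in the master I execute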

q).z.pd:`u#hopen each 5000+til 5
q)\l start/db
k){if[~.d=*f:!x:-1!x;:f!x f];f:x.d;+f!$["/"~*|$x;x;x f]}
'seg/d3/2013.05.06/nbbo/bsize: Not enough storage is available to process this command.
@
`:seg/d3/2013.05.06/nbbo
`sym`time`bid`ask`bsize`asize

I see the master q process has 4 GB of commit size in Windows Task Manager.

q).Q.w
used| 181552
heap| 67108864
peak| 67108864
wmax| 0
mmap| 0
mphy| 4294967295
syms| 757
symw| 32698

I have 16 GB of physical memory on this machine.

Is it correct to load start/db in the master process, or is there some other way I should configure the master process to allow queries to be executed on the slaves?

I tried following https://groups.google.com/forum/#!searchin/personal-kdbplus/.z.pd/personal-kdbplus/i5IAnvK-S9M/v74MxVxYqiAJ with no success.

My platform is Windows.

It looks like there’s a difference in how segmented databases get initialized compared to partitioned databases. Running under strace, it appears a segmented database tries to mmap each of the partitions up front. With a sufficiently large db this will exceed the address space: 15 segments at slightly over 500 MB each is roughly 7.5 GB of mappings, against at most 4 GB of address space in a 32-bit process. When I use only partitions and no segments, k reads the table definition (the .d file) at load time, but mmaps a column file only while executing a query against it, and munmaps it afterwards.
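This is easy to watch from inside q as well (a sketch; db here stands for a purely date-partitioned copy of the data, and the query is illustrative):

q)\l db                / partitioned root, no par.txt
q)(.Q.w[])`mmap        / nothing is mapped by the load itself
0
q)r:select avg bsize from nbbo where date=2013.05.06
q)(.Q.w[])`mmap        / bsize was mapped during the query, then unmapped
0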

I have been able to create an HDB that is over 5 GB with a partition per month. I still run into 'wsfull doing various calculations, but that’s to be expected.
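For what it’s worth, the wsfull cases can usually be broken up by aggregating one month partition at a time and then combining the partial results; a sketch, where the sum/count aggregate is just an illustration (nbbo and bsize as above, month being the partition-value list q defines for a month-partitioned root):

q)parts:{0!select sum bsize,cnt:count i by sym from nbbo where month=x} each month  / one partition per pass
q)select sum bsize,sum cnt by sym from raze parts                                   / combine the partials

Each pass maps and releases only a single month’s column files, so peak usage stays near the size of the largest partition.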