mmap increasing every time table is queried

Hi all,

We have a table with a mix of atom and string columns. Every time we query certain date partitions with a simple select:

select from table where date = x

we see the process mmap usage increase. The only way we know to reduce this is by restarting the process.

We tested querying with subsets of the return columns, and found that if we exclude the 4 string columns (string1, string2, string3, string4) from the result, mmap doesn’t increase.

The amount by which mmap increases varies with the columns returned. For example:

  1. If we return only the (string1, string2, string3, string4) columns, mmap doesn’t increase
  2. If we return all columns, mmap increases by 2461536 bytes
  3. If we exclude any 3 of the 4 string columns, mmap also increases by 2461536 bytes
  4. If we return 1-4 of the string columns along with the virtual column “date”, mmap doesn’t increase
  5. If we return 1-4 of the string columns along with an increasing number of other columns, mmap increases in steps (shown in the snippet below)

Would anyone be able to shed any light on this behaviour?

FYI we looked at the values in the 4 string columns and confirmed they are all of type 10h.
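For reference, the check was something along these lines, run against one of the affected columns (the partition path here is just a placeholder):

/ expect ,10h if every value in the column is a char vector (a string)
distinct type each get `:2021.01.01/table/string1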

We are using kdb+ 3.5 2017.10.11.

We don’t assign the result of the query to a variable.
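For context, here is a minimal sketch of the kind of before/after measurement described above, using a functional form of the select so the returned columns can be varied (the table, column and date names follow the examples in this post and are otherwise placeholders):

/ change in .Q.w[]`mmap caused by selecting a given column subset from one date partition
mmapDelta:{[cs;dt]
  before:.Q.w[]`mmap;                      / mapped bytes before the query
  ?[`table;enlist(=;`date;dt);0b;cs!cs];   / select the cs columns from table where date=dt (result not assigned)
  after:.Q.w[]`mmap;                       / mapped bytes after the query
  after-before }

mmapDelta[`string1`string2`string3`string4;2021.01.01]   / point 1 above: expect 0
mmapDelta[cols table;2021.01.01]                         / point 2 above: expect 2461536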

Thanks,

Eoghan

// .Q.w output after returning increasing numbers of columns,
// where one column is one of the 4 aforementioned string columns
// delta_mmap added to show difference between mmap values
// Note we run .Q.w before and after the select statement;
// the difference between the two runs is what is shown below
numCols used heap peak wmax mmap    mphy syms symw delta_mmap
--------------------------------------------------------------
2       160  0    0    0    87912   0    0    0    87912
3       160  0    0    0    175824  0    0    0    87912
4       160  0    0    0    219780  0    0    0    43956
5       160  0    0    0    263736  0    0    0    43956
6       160  0    0    0    307692  0    0    0    43956
7       160  0    0    0    351648  0    0    0    43956
8       160  0    0    0    395604  0    0    0    43956
9       160  0    0    0    483516  0    0    0    87912
10      160  0    0    0    527472  0    0    0    43956
11      160  0    0    0    571428  0    0    0    43956
12      160  0    0    0    615384  0    0    0    43956
13      160  0    0    0    659340  0    0    0    43956
14      160  0    0    0    747252  0    0    0    87912
15      160  0    0    0    835164  0    0    0    87912
16      160  0    0    0    879120  0    0    0    43956
17      160  0    0    0    923076  0    0    0    43956
18      160  0    0    0    967032  0    0    0    43956
19      160  0    0    0    1010988 0    0    0    43956
20      160  0    0    0    1054944 0    0    0    43956
21      160  0    0    0    1142856 0    0    0    87912
22      160  0    0    0    1186812 0    0    0    43956
23      160  0    0    0    1230768 0    0    0    43956
24      160  0    0    0    1318680 0    0    0    87912
25      160  0    0    0    1362636 0    0    0    43956
26      160  0    0    0    1450548 0    0    0    87912
27      160  0    0    0    1494504 0    0    0    43956
28      160  0    0    0    1582416 0    0    0    87912
29      160  0    0    0    1626372 0    0    0    43956
30      160  0    0    0    1714284 0    0    0    87912
31      160  0    0    0    1758240 0    0    0    43956
32      160  0    0    0    1802196 0    0    0    43956
33      160  0    0    0    1846152 0    0    0    43956
34      160  0    0    0    1890108 0    0    0    43956
35      160  0    0    0    1934064 0    0    0    43956
36      160  0    0    0    1978020 0    0    0    43956
37      160  0    0    0    2065932 0    0    0    87912
38      160  0    0    0    2153844 0    0    0    87912
39      160  0    0    0    2241756 0    0    0    87912
40      160  0    0    0    2285712 0    0    0    43956
41      160  0    0    0    2329668 0    0    0    43956
42      160  0    0    0    2373624 0    0    0    43956
43      160  0    0    0    2417580 0    0    0    43956
44      160  0    0    0    2461536 0    0    0    43956

First suggestion would be to test against the latest version of 3.5.

Several fixes were released after the version you are using.

Further supporting the suggestion to update Q, this blog post might be of interest to you, specifically the ANYMAP feature that was added in v3.6:
“The anymap structure within the files provides a format which is mappable, as opposed to previously unmappable non-fixed-width records.”
Strings are non-fixed-width records, which would explain the mmap values you’re experiencing. Further reading on this can be found in the release notes.

Will get a newer version and test, thanks

Thanks for the info. We were thrown by the fact that we had ~200 date partitions for this table and only saw this behaviour in 3 of them.

The data doesn’t show any clear differences that we can see.

It is a bit odd that you can’t find the exact cause of the mmap increasing when similar data doesn’t show the same trend. If it persists after upgrading Q, let’s investigate; if the behaviour is resolved by the update, then we can say the issue was covered by ANYMAP, and maybe that can guide you towards what the differences are in the data you’re seeing on the older version of Q.

Yeah, that sounds like a good approach. We haven’t gotten around to upgrading yet; in the meantime I see errors like this in my HDB when I’ve queried the same table a number of times:

./2021.07.27/orders/aSymbolColumn. OS reports: Cannot allocate memory

When this happens, .Q.w looks like the example below, where mmap has ballooned but used is fine:

used| 204451472
heap| 805306368
peak| 3422552064
wmax| 0
mmap| 5866853440032
mphy| 809955348480
syms| 453383
symw| 44899939
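To give a concrete picture of “queried a number of times”, the pattern is roughly the following sketch (table and date names are placeholders matching the error path above):

/ run the same select n times and print .Q.w[]`mmap after each pass
watchMmap:{[n] do[n; select from orders where date=2021.07.27; show .Q.w[]`mmap]; }
watchMmap 5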

  1. Do you know what version of kdb+ wrote the files?
  2. Is the data compressed?
  3. Are there attributes on any of the affected columns?
  4. Are any of the columns linked columns?
  5. If you go through the columns one at a time do their counts all match?
  6. Can you read the bad partitions in q and write them to a temp HDB location? Do these rewritten files still show the same memory behaviour when you load the temp HDB?
  1. I don’t; most likely the same as we are using now
  2. No
  3. No
  4. No
  5. No, the four columns that cause mmap to increase all have a count of 22210, whereas the rest have a count of 33199
  6. When I re-write a bad partition and load it, all columns have a length of 22210 and I don’t see any mmap increase when I query the table

All columns in a splayed table should have the same number of rows, so there was some issue with the writedown of this data. This is most likely the source of the problem.
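As a quick sketch of that check against the partition directory itself (the path is hypothetical, following the error message above; run it in the loaded HDB session so enumerated columns resolve):

p:`:2021.07.27/orders                 / suspect partition
c:get ` sv p,`.d                      / the .d file lists the splay's columns in order
c!{count get ` sv (x;y)}[p] each c    / row count per column file; these should all match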

Can you recreate the data from source/backup/TP-logs?

When you read and rewrite, you are losing 33199-22210=10989 rows of data from the “good” columns.

The TP logs from those dates have been deleted by housekeeping jobs. We’ll investigate the process logs related to these write-downs if they still exist, thanks!