Fast lookups on a keyed table with many inserts/deletes

I have an in-memory table like

 

```q
q)n:100000; t:([id:n?0Ng] px:n?1000; qty:n?1000)
q)t
id                                  | px  qty
------------------------------------| -------
14891235-0058-c170-a6a5-19ed0b40db8e| 555 257
ef32924d-4756-7841-2d6e-fa85c2d378b8| 585 858
e256cb74-c92c-527f-ad0c-849a5d50f6e7| 829 585
8b8e0fdb-82a3-00cc-c6cb-9227c8fef6cd| 923 90
```

 

And I want to perform millions of single lookups, single deletes, and single inserts, all keyed on id. Lookups can be sped up ~100x by applying the `g# attribute to id, but the problem is that the attribute is lost every time I delete from the table. Reapplying the attribute after every delete is not an option because rebuilding the index each time is too costly.
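For example, a sketch of what I am seeing (`attr` reports a column's attribute; exact output will vary with the random ids):

```q
q)t:1!update `g#id from 0!t              / apply the grouped attribute to the key column
q)attr (0!t)`id                          / `g
q)delete from `t where id=first (0!t)`id / delete a single row by id
q)attr (0!t)`id                          / attribute is gone after the delete
```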

 

The only possibility I can think of is to add a column that simply flags rows for deletion, then periodically do a mass delete of all flagged rows once the table grows too large and reapply `g#. Is there a better option?

Yes.

Given that data structures in q are mostly linear (or linear at their core), frequent deletions incur a very heavy performance penalty once the data gets large. A better approach is to keep appending and perform logical deletes (i.e. flag rows as deleted) instead.
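A minimal sketch of that scheme, assuming a boolean `deleted` column; the `compact` helper name is my own invention:

```q
n:100000
ids:n?0Ng
/ keyed table with a boolean deleted flag; `g# applied to the key column
t:([id:`g#ids] px:n?1000; qty:n?1000; deleted:n#0b)

/ single lookup by id (check the deleted flag in the result)
t first ids

/ logical delete: amend the flag in place; `g# on id is untouched
t[first ids;`deleted]:1b

/ single insert: appending preserves `g#, unlike deleting
`t upsert (first 1?0Ng; 42; 7; 0b)

/ periodic compaction: physically drop flagged rows, reapply `g#, re-key
compact:{`t set 1!update `g#id from 0!delete from t where deleted;}
```

Run `compact[]` only when the flagged rows account for a large fraction of the table; that amortizes the one-off cost of rebuilding the index over millions of cheap logical deletes.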