Query Performance

The following query takes under 1 millisecond to run and returns 61 records.

select sym,bar,timestamp:floor((long$("p"$bar))-long$1970.01.01D00:00)%1e6,open,high,low,close,volume,dx:0 from forex_bar where bar within (2017.08.18T10:00;2017.08.18T11:00),sym=`GBPUSD
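As an aside, the timestamp column here converts the datetime bar to milliseconds since the Unix epoch: casting with "p"$ gives a timestamp (nanoseconds from 2000.01.01), subtracting long$1970.01.01D00:00 rebases to 1970, and %1e6 scales nanoseconds to milliseconds. A minimal sketch of the same conversion on a single value:

```q
b:2017.08.18T10:00                      / a datetime bar value
n:(long$"p"$b)-long$1970.01.01D00:00    / nanoseconds since the Unix epoch
floor n%1e6                             / milliseconds since the Unix epoch
```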

Adding the dx functional select adds roughly 10 ms to the query:

select sym,bar,timestamp:floor((long$("p"$bar))-long$1970.01.01D00:00)%1e6,open,high,low,close,volume,dx:raze {[x;y]?[fxsc1_bar;((>=;`bar;(-;x;00:06:00.000));(=;`sym;enlist y));0b;(enlist`close)!enlist(first;`close);10]`close}'[bar;sym] from forex_bar where bar within (2017.08.18T10:00;2017.08.18T11:00),sym=`GBPUSD

Could anyone suggest a way to speed this up? In some cases query times go from 1-2 ms to 100-200 ms with the functional select added.

If you increase the time period in your `where` clause to two hours, does your query time go from 2-4 ms to 200-400 ms?
Can you write dx without `each`?

Looking at your code, the functional select and iteration seem unnecessary: running a subquery per row means the cost grows linearly with the number of records returned. I *think* you can eliminate the iteration by using binr:

select sym,bar,timestamp:floor((long$("p"$bar))-long$1970.01.01D00:00)%1e6,open,high,low,close,volume,dx:close bar binr bar-00:06:00 from forex_bar where bar within (2017.08.18T10:00;2017.08.18T11:00),sym=`GBPUSD
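For anyone unfamiliar with binr: it binary-searches a sorted list for the leftmost item that is >= the right argument, so on a sorted bar column it finds the index of the bar at or after a given time. A small sketch with a toy minute vector:

```q
/ binr: index of the leftmost item of the (sorted) left argument >= y
t:09:00 09:01 09:02 09:05 09:06
t binr 09:02        / 2 - exact match
t binr 09:03        / 3 - 09:03 falls forward to 09:05
t binr t-00:02      / 0 0 0 3 3 - each bar's index two minutes back
/ so close t binr t-00:02 picks each bar's close from ~2 minutes earlier
```

Because binr is a binary search on an already-sorted column, the whole dx column costs O(n log n) rather than one subquery per row.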

Thanks for the replies.

@effbiae Yes, if I increase the timeframe the query times go up a lot.

@Ajay The problem is that the data is inconsistent, which doesn't seem to fit the kdb+ model. I have instead moved the update into a timer event: the upd handler sets a flag and the code only pulls the data it needs to update, which works more efficiently. It is purely a subselect for a specific value; I also changed it from time-based to bar-based. It's not ideal, but it works and appears to be faster:

dx:raze {[x;y;z] -1#?[forex_bar;((<=;`bar;x);(=;`sym;enlist y));0b;(enlist`close)!enlist(xprev;z;`close);-500]`close}'[bar;sym;dz]
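For reference, xprev shifts a vector forward by n slots and null-fills the front, which is what supplies the lagged close above. A small sketch:

```q
/ xprev: shift a vector forward by n, null-filling the front
2 xprev 10 20 30 40 50      / 0N 0N 10 20 30
/ so (xprev;z;`close) in the parse tree lags the close column by z bars,
/ and -1# takes only the last (most recent) lagged value per subselect
```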