Have you tried LLM coding assistants with q/kdb+?

Has anyone had success using LLMs (Claude, ChatGPT, Gemini, Cursor etc.) with q/kdb+?

I’ve been experimenting with exactly this challenge: LLMs seem to know q conceptually but struggle with its syntax and idioms. While brainstorming how to make current LLMs work, I built a q MCP server you can install with pip install qmcp (here’s the PyPI page). It lets Claude Code run q code and keep iterating on its errors until it gets things right; I’ve posted a transcript in a blog article on Medium.
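To give a flavour of the idioms involved, here’s a small q snippet (the table and column names are purely illustrative, not from qmcp) of the kind of terse, built-in-heavy code that models tend to fumble, often reaching for explicit loops instead of qSQL and the vector primitives:

```q
/ toy trade table (illustrative only)
trade:([] sym:`a`b`a`b; price:10 20 30 40f; size:100 200 300 400)

/ idiomatic q: volume-weighted average price per symbol in one line,
/ using the built-in wavg and a by-clause rather than any explicit loop
select vwap:size wavg price by sym from trade
```

Having an execution loop means the model doesn’t have to get this right first time: it can run its attempt against a live q session, see the (famously terse) error, and revise.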

Curious about others’ experiences - have you found ways to make LLMs more effective with q? Any particular prompting strategies or tools that work well? Or is the language just too specialized for current models?

I’m also thinking of building a corpus of q code to turn this MCP server into a coding-assistant tool that makes LLMs genuinely helpful. Do you have any suggestions for code collections or git repos that would give good coverage of different q coding patterns?

I’ve tried a few of them, but they seem to get confused. Fortunately KX is working on training a ‘qLLM’ :slight_smile:

That’s exciting! I’d love to learn more about the qLLM project. Given that I’ve been working on this problem and have some working solutions and insights from the MCP server experiments, I wonder if the team might be interested in exchanging ideas or collaborating?

Is there a way to get in touch with them, or would you know who’s leading this initiative at KX? Thanks!