What metrics do you use to measure the code quality of Q code? And what tools are used?
I use the following measures/tools:

- Does the code meet the style guidelines (short functional code, one namespace per file, appropriately named variables/functions): http://www.timestored.com/kdb-guides/q-coding-standards
- Is each publicly exposed function/API documented: http://www.timestored.com/qstudio/help/qdoc
- Does each publicly exposed function have unit tests that cover at least the most common cases: http://www.timestored.com/kdb-guides/kdb-regression-unit-tests
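To illustrate the kind of test coverage I mean, here is a minimal hand-rolled sketch (this is not the API of any particular test framework; `.mylib.vwap` and `assertEq` are names I've made up for the example):

```q
/ hypothetical public function under test
.mylib.vwap:{[p;v] (sum p*v)%sum v};

/ tiny assertion helper: print a failure record, return pass/fail
assertEq:{[act;exp;msg] $[act~exp; 1b; [0N!(`FAIL;msg;act;exp); 0b]]};

/ cover at least the most common cases
assertEq[.mylib.vwap[100 102f;10 30f]; 101.5; "weighted average"];
assertEq[.mylib.vwap[enlist 5f;enlist 1f]; 5f; "single trade"];
```

In practice the framework linked above handles grouping, reporting and rerunning for you; the point is simply that every public function gets at least this level of checking.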
This then combines into a module/version system that uses scripts to build, run historical/UAT tests, deploy etc. Defects found during each phase (including post-release) I record, to analyse their cost and why they were missed, in the hope of avoiding similar quality issues in future.
I've been working on two additions to the unit testing:

- Coverage reporting: gives you feedback on which of your functions have test coverage and which don't, similar to EMMA for Java: http://emma.sourceforge.net/samples.html
- Performance checks: records the memory used and time taken, then warns if either has significantly increased compared with past unit test runs.
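The second of those can be sketched with q's built-in `\ts` (time and space), invoked via `system` so the figures can be captured and compared; the baseline numbers and 50% threshold below are illustrative, not from any real run:

```q
/ measure an expression: returns (milliseconds; bytes of workspace used)
measure:{system "ts ",x};

baseline:12 350000;                       / figures saved from an earlier good run
current:measure "sum til 1000000";        / stand-in for running the test suite
regressed:any current > 1.5 * baseline;   / flag a 50%+ increase in time or space
```

Storing the per-test `(time;space)` pairs after each run gives you the history needed to spot a regression the moment it is introduced, rather than in production.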
If you meant automatic code analysis of cyclomatic complexity, file/function length etc., that isn't something I'd considered. It's an interesting idea, and some of it might be quite easy to write in q. In the end the best code quality tool is probably end users; they always find any bugs. ;)
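As a taste of why some of this might be easy in q: functions are introspectable at runtime, so a crude function-length report over a namespace is only a few lines. This is a speculative sketch, not a tool I've built; it relies on `last value f` returning the source text, which holds for plain lambdas but not for projections or compositions:

```q
/ crude static-analysis sketch: source length (chars) per function in a namespace
fnReport:{[ns]
  names:system "f ",string ns;            / functions defined in ns
  full:` sv' ns,'names;                   / fully-qualified names
  full!{count last value get x} each full};

/ usage: fnReport `.mylib  -> dict of function name -> source length
```

Flagging anything over a chosen length would give a rough first cut at the file/function-length analysis you describe.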
Regards,
Ryan Hamilton