Re: Measuring programmer quality
by w-ber (Hermit) on Oct 27, 2007 at 19:39 UTC
While this is comparing apples and rubber boots, do you measure the quality of a novel by its word count, by its spelling errors, or by how well the story is decomposed into chapters? Or by the average time it takes the "average reader" to read it?
Because the ultimate product of a programmer is the source code, the temptation to compute metrics from it is hard to resist. The number of lines of code is possibly the first and also the fuzziest one: what is a line of code? Is it a statement in the programming language you use? An expression? A function call? A code block?
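The ambiguity is easy to demonstrate. Here is a sketch in Python (not from the original post): the same computation is one physical line or four, depending only on how it is laid out.

```python
# Sum of squares of even numbers below 10, written two ways.

# One physical line:
total_a = sum(x * x for x in range(10) if x % 2 == 0)

# Four physical lines, identical behavior:
total_b = 0
for x in range(10):
    if x % 2 == 0:
        total_b += x * x

assert total_a == total_b  # same program, 1 "line" vs 4 "lines"
print(total_a)  # 120
```

Any line-counting tool must arbitrarily decide whether these two versions represent the same amount of work.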
In assembly language, one line of code does not do much: perhaps adding an integer to the contents of a register and storing the result in another register. In a highly domain-specific language, a single line can, for instance, parse an XML file and extract the needed data while updating a progress counter. Moreover, in programming languages that let you overload operators such as + and *, "var := 1 + 1;" and "var := object1 + object2;" can have completely different complexities: the former is (likely!) just summing two integers, while the latter can be anything from summing boxed numbers to computing the powerset of two sets. (Yes, that has little to do with the symbol +.)
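Since the ":=" snippets above are pseudocode, here is a runnable illustration in Python, with a hypothetical class invented for the example: the same "+" symbol is a single machine-level addition in one line and a whole merge loop in the next.

```python
class Tally:
    """Hypothetical class whose "+" merges two dictionaries of counts."""

    def __init__(self, counts):
        self.counts = dict(counts)

    def __add__(self, other):
        # One "line" at the call site, but a loop and a dict merge underneath.
        merged = dict(self.counts)
        for key, n in other.counts.items():
            merged[key] = merged.get(key, 0) + n
        return Tally(merged)

var = 1 + 1                                       # plain integer addition
obj = Tally({"a": 1}) + Tally({"a": 2, "b": 5})   # hidden loop behind "+"
print(var)         # 2
print(obj.counts)  # {'a': 3, 'b': 5}
```

Both statements are one line of code; only one of them is one unit of work.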
What meaningful thing does "a line of code" measure here? Should the former be counted towards "lines spent" and the latter towards "lines saved", because the latter hides more complexity? Or vice versa? How could you even automate counting something like this? What does word or line count tell you about a novel or a dissertation? The size on disk, perhaps.
The number one reason this metric is so widely used is that it is trivial to compute.
The problem is that the important things, the things that really matter in programming, are inside your head. (What a lame thing to say. Of course they are.) The source code is the ultimate product, but how you arrived at it matters even more. What kinds of solutions did you use? How did you solve the problems? Are there other solutions? Why did you pick these? How did you figure out how to implement them?
Equally relevant (and something that might actually be measurable) is whether the program meets the requirements set for it. If you used a formal specification language or some other means to capture exactly what the requirements are, you could check whether the program meets them, and which particular requirements it does not. You can encode some of the requirements in unit tests, but far from all of them.
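As a minimal sketch of that idea (the function and requirement here are hypothetical, not from the post): one testable requirement, "a discount must never produce a negative price", encoded as plain Python assertions. Requirements like performance or usability resist this treatment.

```python
def apply_discount(price, percent):
    """Hypothetical function under test: apply a percentage discount."""
    return max(0.0, price * (1 - percent / 100.0))

def test_price_never_negative():
    # Requirement: even a >100% discount must not yield a negative price.
    assert apply_discount(10.0, 150) >= 0.0

def test_normal_discount():
    # Requirement: a 25% discount on 100.0 yields 75.0.
    assert abs(apply_discount(100.0, 25) - 75.0) < 1e-9

test_price_never_negative()
test_normal_discount()
print("encoded requirements: pass")
```

Each test pins down one requirement; the ones you cannot phrase as an assertion stay unverified.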
Personally, I use a strange combination of the quality of documentation (I usually have more documentation than source code, but this doesn't tell much), "amount of decoupling" (meaning no more than one concern per module or responsibility per class), light unit tests, and that sense of having the "right" solution. I can't make these explicit, sorry.