Re^2: Cyclomatic Complexity of Perl code

by EdwardG (Vicar)
on Jan 12, 2005 at 11:07 UTC ( [id://421562] )


in reply to Re: Cyclomatic Complexity of Perl code
in thread Cyclomatic Complexity of Perl code

I empathise with your tales of woe, but I think you go too far in dismissing the value of metrics as source code instrumentation.

If one treats a given metric as part of the whole picture, without mistaking that single metric for the whole of the picture, I think metrics can help a great deal, particularly when dealing with large volumes of code and large numbers of coders, and especially in informing decisions about where to spend code review and unit test effort.
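
For instance, here is a minimal sketch of what I mean by directing effort with the numbers. The per-method scores are the ones from the table further down; the threshold and the ranking scheme are my own arbitrary choices:

  #!/usr/bin/perl
  use strict;
  use warnings;

  # Per-method cyclomatic complexity, as reported by the tool.
  my %complexity = (
      GenMoves      => 50,
      FENToPosition => 41,
      IsAttacked    => 26,
      Slide         =>  5,
  );

  # Review the most complex methods first; the cut-off of 20 is arbitrary.
  for my $method (sort { $complexity{$b} <=> $complexity{$a} } keys %complexity) {
      printf "%-15s %3d%s\n", $method, $complexity{$method},
          $complexity{$method} > 20 ? '  <-- review first' : '';
  }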

It sounds like you have suffered from idiotic applications of misguided rules more than from the metrics themselves, but perhaps that is the price of their existence: that they will be abused, like so much else.

Out of interest, I generated these metrics for part of my hobby chess engine:

Method          Cyc. Compl.   LOC   Blank lines   Comment lines   Comment %   Statements
GenMoves                 50   437            23             114       26.09          236
FENToPosition            41   117            18              27       23.08           99
IsAttacked               26   193            17              74       38.34          102
PieceToString            16    21             0               0        0.00           32
ColourOfPiece            14    16             4               0        0.00            4
PieceToString            10    15             0               0        0.00           20
HitTheEdge               10    31             9               0        0.00           10
ASCToFEN                  9    53             6              19       35.85           26
FirstBit                  8    39            10               0        0.00           21
MoveToString              8    28             1               0        0.00           17
Slide                     5    25             6               4       16.00           15
GetMoveCount              4    21             6               0        0.00           12
PrintMoveList             2     5             0               0        0.00            2
LeftEdge                  1     4             0               0        0.00            2
RightEdge                 1     3             0               0        0.00            1
TopEdge                   1     3             0               0        0.00            1
BottomEdge                1     3             0               0        0.00            1
SquareToString            1     3             0               0        0.00            1
SquareToString            1     3             0               0        0.00            1
SquareToRank              1     3             0               0        0.00            1
SquareToFile              1     3             0               0        0.00            1
ASCToPosition             1     3             0               0        0.00            1
Warn                      1     3             0               0        0.00            1

As a profile of this part of the engine, I found this very interesting. For instance,

  • Why do four of the complex methods have no comment lines at all?
  • Why is FENToPosition so complex? (this was a surprise)
  • Is Slide() under-commented at 16%?
  • What (if any) is the correlation between lines of code (LOC) and complexity?
  • Is this consistent with other code I've written?
  • How do all these stats compare to other projects?
  • Does this align with my intuitive sense of their complexity?
  • Is there a code density metric I can derive from LOC and statements? (see the sketch below)

Now some of these thoughts I could have had by simply browsing the code, but I hope I'm illustrating the usefulness of these metrics as a high-level view. I find them provocative in a highly beneficial way.
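
As a first stab at that last question, here is a quick sketch. The density ratio (statements per line of code) is my own ad-hoc definition; the figures come from the table above:

  #!/usr/bin/perl
  use strict;
  use warnings;

  # LOC and statement counts from the table above; "density" here is
  # simply statements divided by lines of code.
  my %methods = (
      GenMoves      => { loc => 437, statements => 236 },
      IsAttacked    => { loc => 193, statements => 102 },
      PieceToString => { loc =>  21, statements =>  32 },
  );

  for my $name (sort keys %methods) {
      my $m = $methods{$name};
      printf "%-15s density = %.2f\n", $name, $m->{statements} / $m->{loc};
  }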

 

Replies are listed 'Best First'.
Re^3: Cyclomatic Complexity of Perl code
by BrowserUk (Patriarch) on Jan 12, 2005 at 17:21 UTC
    I empathise ... It sounds like you have suffered from idiotic applications of misguided rules

    No need for empathy. It was a long time ago, and we didn't suffer :) The result of the study was that we rejected both the tool and the idea.

    With respect to your statistics: one set thereof does not a case make.

    I would derive two things from my reading of the numbers.

  • Three modules are heavily over commented.
  • Large modules are more complicated than small ones.

    What I cannot say from those numbers is whether the complexity arises from

    • the needs of the algorithm required to perform the function of those methods;
    • a bad algorithm having been used;
    • a good algorithm having been badly implemented; or
    • the "complex" (read: big) methods doing too much.

    Indeed, without inspecting the source code, I cannot even tell the accuracy of those metrics.

    It could be that "HitTheEdge" contains a recursive algorithm that is extremely complicated to follow and modify, but simple in its code representation.

    Or that "GenMoves" is a huge if/then/else structure that would be better implemented as a dispatch table

    Or that "Slide" uses a string eval to replace itself with an extremely complicated subroutine that the source code analyser sees simply as a big string constant.

    And that's my point. You are already looking to derive further metrics from the generated metrics, but there is no way to validate the efficacy of those metrics you have, beyond inspecting the code and making a value judgement.

    So you are already falling into the trap of allowing the metrics to become self-serving, but the metrics themselves are not reproducible, scientific measurements; they are simply "indicator values".

    When you measure the length, mass, hardness, reflectivity, temperature, elasticity, expansion coefficient etc. of a piece of steel, you are collecting a metric which can be reproduced by anyone, anywhere, any time. Even if the tools used to make the measurement are calibrated to a different scale, it is a matter of a simple piece of math, or a lookup table to convert from that scale to whichever scale is needed or preferred. This is not the case for any of the metrics in your table.

    You don't say what language your program is coded in, but I could (probably) take all of your methods and reduce them to a single line. It would be a very long line, but in most languages it would still run perfectly well. What effect does that have on your metrics?
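
    In Perl, at least, that is trivial to demonstrate. These two subs behave identically, yet a line-counting tool reports very different LOC and comment figures for them (the sub itself is a throwaway example of my own):

        #!/usr/bin/perl
        use strict;
        use warnings;

        # Formatted version: five lines and a comment.
        sub colour_of_piece {
            my ($piece) = @_;
            # Lowercase pieces are black, uppercase are white.
            return $piece =~ /[a-z]/ ? 'black' : 'white';
        }

        # The same function reduced to one line: identical behaviour,
        # very different "metrics".
        sub colour_of_piece_oneline { my ($piece) = @_; return $piece =~ /[a-z]/ ? 'black' : 'white' }

        print colour_of_piece('q'), ' ', colour_of_piece_oneline('Q'), "\n";    # black white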

    Equally, we could get half a dozen monks (assuming Perl) to refactor your methods according to their own formatting and coding preferences and skill levels. Even if they all do a bang-up job of reproducing the function of your originals--bugs and all--and we then use the same program you used to measure their code, they will all produce different sets of numbers.

    And that is the crux of my distaste for such numbers. They are not metrics. They do not measure anything! They generate a number, according to some heuristic.

    They do not measure anything about the correctness, efficiency or maintainability of the code that gets run, they only make some guesses, based upon the way the coder formatted his source code.
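
    You can see that heuristic character in miniature with a naive counter of my own devising. It simply pattern-matches the source text, so even a comment can sway the "measurement":

        #!/usr/bin/perl
        use strict;
        use warnings;

        # A naive "cyclomatic complexity" heuristic: one plus the number of
        # branch keywords found in the source text. It happily counts
        # keywords inside comments and strings too.
        sub naive_complexity {
            my ($source) = @_;
            my $branches = () = $source =~ /\b(?:if|elsif|unless|while|until|for|foreach)\b/g;
            return 1 + $branches;
        }

        my $src = q{
            # check if the square is off the board
            sub hit_the_edge {
                my ($sq) = @_;
                return 1 if $sq < 0;
                return 1 if $sq > 63;
                return 0;
            }
        };
        print naive_complexity($src), "\n";    # prints 4: the 'if' in the comment
                                               # inflates the true branch count of 2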

    1. They do not conform to any standards.
    2. They are not transferable between programmers.
    3. They are not transferable between algorithms.
    4. They are not transferable between programs.
    5. They are not transferable between languages.
    6. They are not transferable between sites.
    7. They are not transferable between design or coding methods (OO, procedural, functional, etc.).
    8. They are not transferable between assessment methods or tools.

    In short, they are not comparable, and you cannot perform math with them.

    As proof of this, take a look at your "PieceToString" and "HitTheEdge" methods. They have an equal 'complexity' when measured by the same tool. Is this obvious, or even definable, from looking at the source code? If I am given two pieces of steel 10 cm long, even without measuring them with a rule, I can easily tell they are the same length. No such comparison is possible for source code.

    The tool has become the only way of comparing source code, and as it does not (and could not) adhere to any standard, all measurements are relative, not absolute. So, unless everyone agrees on which tool/language/coding standards etc. etc. to use, there is no way to compare two versions of the same thing.

    That means that in order to make comparisons, you have to implement every possible (or interesting) version of the source code, before you can make any inference about whether any one is good or bad.

    And even if you could code every possible implementation of a given algorithm, and could prove that they all produced exactly the same results, and you generated your numbers: What would it tell you?

    Should you pick the version with the lowest complexity rating? The shortest? The longest? The one with the highest ratio of comments?

    Would you make any choice based on the numbers alone? Or would you have to look at the source code?

    If you admit that you would have to look at the source code, then you have just thrown your "metrics" in the bin in favour of your own value judgement.

    And if you didn't, then you should publish the formulae by which you are going to juggle all those numbers in order to make your decision. It should make for interesting reading.


    Examine what is said, not who speaks.
    Silence betokens consent.
    Love the truth but pardon error.

      You seem to be confused about the purpose of metrics. They are NOT an end in themselves, and they are NOT useful in isolation from the code. In demanding such things, and exaggerating their abuses, you are setting up straw men and knocking them down in puffs of rhetoric.

       

        I am anything but confused. On this subject, I am very clear. But okay. I'll play. You explain it to me.

        Exactly what use are you going to make of the numbers in your table above?

        But be warned! I've been here before. The moment you explain a use of those numbers in your table, you will be making judgements based upon them. And the moment you do that, you are using the numbers, in some way, to represent the code they are derived from.

        If those numbers can only be used in conjunction with the code itself, then what part do the numbers play? What purpose do they serve?

        If you answer that question honestly, you'll see that nothing in my post was rhetoric. It is all based upon having been there, and done that, and seen the effects that it has.


        Examine what is said, not who speaks.
        Silence betokens consent.
        Love the truth but pardon error.
