But a friend of mine, whom I respected, took exception to some of its advice. In chapter 5.5 it cited research on the optimal length of subroutines and concluded that the evidence for benefits from very short routines (say, under 20 lines) is scant, but that routine lengths over about 200 lines start getting much worse. Chapter 15.2 has similar advice on loops: they should not exceed one page in length, though in practice good programmers rarely want more than 15-20 lines.
However, this friend, who had been doing OO programming for a long while, found that conclusion absurd. Competent OO programmers with a lot of experience tend to go for shorter methods, often far shorter; 10 lines is pretty common. And I had to admit that my subjective experience says this is good. Short routines really do make a difference.
So why is the research that Steve McConnell found so at odds with our direct experience? I think the answer lies in the second study he cites (by Shen et al. in 1985): length is not correlated with errors, but complexity is. This point is revived in section 17.5 with his discussion of Tom McCabe's measurement of complexity, based on how many decision points a routine has. The complexity of a function is 1 (for the function), plus 1 for every if, while, repeat, for, and, and or, plus 1 for every case of a case statement. The suggestion (presumably based on research, not all quoted there) is that a routine of complexity up to 5 is probably fine, 6-10 might be getting out of hand, and higher than that tends to indicate problems.
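To make the counting rule concrete, here is a small hypothetical routine (the names and the scenario are mine, not McConnell's) annotated with a +1 at each point the rule charges:

```python
def order_total(orders):                    # +1 for the routine itself
    total = 0
    for o in orders:                        # +1 for the for
        if o["paid"] and o["shipped"]:      # +1 for the if, +1 for the and
            total += o["amount"]
        elif o["cancelled"]:                # +1 for the elif (another if)
            total -= o["fee"]
    return total
# McCabe complexity = 5: right at the "probably fine" threshold.
```

Add a couple more conditions or another loop and the same short routine slides into the 6-10 "getting out of hand" band, which is the point: the measure tracks branching, not line count.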
Now, that measure was first proposed in 1976 and was studied in the context of procedural languages like C and Pascal. How can we modify this measure for an OO program? Well, what leaps out at me is that every method call has an implicit if in it! Therefore long stretches of boring procedural code may have very few decision points, but any significant stretch of OO code is going to have a lot. Make a dozen method calls and... oops.
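A sketch of that implicit branching (the classes and names here are illustrative, not from any of the cited work):

```python
import math

class Rect:
    def __init__(self, w, h):
        self.w, self.h = w, h
    def area(self):
        return self.w * self.h

class Circle:
    def __init__(self, r):
        self.r = r
    def area(self):
        return math.pi * self.r ** 2

def total_area(shapes):
    # Only one explicit decision point (the loop), yet every s.area()
    # call dispatches on the receiver's class at runtime -- the
    # "implicit if" the modified measure would charge to each call.
    return sum(s.area() for s in shapes)
```

Under plain McCabe counting this routine looks trivially simple; under the modified counting, each polymorphic call site adds a hidden branch the reader must resolve in their head.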
Others may disagree with this heuristic, but I think some such modification has real validity. And it suggests that there may be good reason behind several things which I happen to also believe:
- When writing OO code, short methods matter.
- Coding habits (e.g. long routines) that are fine in procedural code can get you into trouble in OO.
- Reading good OO code is, line for line, harder than reading good procedural code. (OTOH the OO code can get the job done in less code - if you use it right.)
- Layering abstractions on top of each other has a significant cost. Don't do it unless there is a corresponding benefit that you can point to for justification.