
(OT) Proving Productivity?

by Ovid (Cardinal)
on Aug 05, 2003 at 15:38 UTC (perlmeditation #281028)

For many people, the average number of lines of code (LOC) per day will seem to be a silly metric. I wonder how it gets measured. For example, if one programmer produces twice as many LOC as another programmer but 5 times as many bugs, is the first programmer more productive? Possibly, if the second programmer has a bug rate about one fifth the average.

For my current project, I determined that I was producing about 150 lines of code a day (more or less). I actually felt pretty happy with that given that the requirements were very sketchy and I had to repeatedly rewrite large portions of the code. I also think the code is reasonably bug free due to a fairly decent set of tests, but what's a success today may prove a failure tomorrow as the client decides that they need something done a little bit differently.

Unfortunately, I don't know whether 150 is really productive, and some days I found myself with a negative count as I pulled code that wasn't needed. Perhaps function points would be a better metric? I'm not sure. The end customer was playing a shell game with requirements, and several times I've made sweeping changes to the code to implement a minor, but very annoying, change. How do you really measure personal productivity and, more importantly, how do you justify it to a client? My current client has been fairly understanding about the requirements problem, but I do have concerns if issues like this occur in the future.
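For what it's worth, the raw number itself is trivial to compute; the judgement calls are all in deciding what counts. A minimal sketch of the averaging, with entirely hypothetical daily net counts (note how a code-pulling day goes negative):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical net LOC counts for one week. A negative day is one
# where more code was deleted than written.
my @daily_loc = ( 320, -40, 150, 210, 110 );

my $total = 0;
$total += $_ for @daily_loc;
my $average = $total / @daily_loc;    # @daily_loc in scalar context = 5

printf "net %d LOC over %d days, average %.1f LOC/day\n",
    $total, scalar @daily_loc, $average;
```

The average comes out the same whether the week was smooth or lumpy, which is exactly why it hides as much as it shows.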


New address of my CGI Course.

Replies are listed 'Best First'.
Re: Proving Productivity?
by dragonchild (Archbishop) on Aug 05, 2003 at 15:52 UTC
    Productivity should be measured in terms of deliverables. A deliverable is a black box that satisfies a set of requirements (either from a requirements document or the design document based on those requirements) and is completed to a certain degree of error-free-ness by a certain date. Either you produced or you didn't.

    Now, a deliverable can be an entire application or a small function. It can satisfy 100 requirements or part of 1. It can be a customer deliverable or an internal deliverable.

    The reason I think in terms of deliverables is that LOC has nothing to do with the real world. For example, I write code differently depending on a huge number of factors:

    • My expected environment
    • My expected maintainers
    • My expected reviewers
    • My coworkers
    • The style of code I'm allowed to write in
    • How much I have to justify myself to my micro-manager
    I have noticed that my LOC changes, sometimes by a factor of 10-20, depending on those factors. (The last factor usually bloats my LOC by at least a factor of 5, if not more.)

    However, if I can provide a deliverable in the time allotted that satisfies the requirements stated and is 99.999% error-free, I have succeeded. If I cannot, I have failed.

    Remember, you can't be 80% pregnant. Likewise, you can't be 80% productive. Either you produced or you didn't.

    We are the carpenters and bricklayers of the Information Age.

    The idea is a little like C++ templates, except not quite so brain-meltingly complicated. -- TheDamian, Exegesis 6

    Please remember that I'm crufty and crochety. All opinions are purely mine and all code is untested, unless otherwise specified.

Re: (OT) Proving Productivity?
by Abigail-II (Bishop) on Aug 05, 2003 at 15:56 UTC
    Milestones. You need to make a list of milestones that your program must meet; then you can measure your progress by the number of milestones you have reached.

    But that's easier said than done, as some milestones will require a lot more work than others. And it might be quite a task to come up with a useful set of milestones. However, if you start by making a design of what you are going to do, and in which order, you already have the start of your list of milestones. And you can make as many milestones as you want, depending on whether you want one milestone a week or several milestones a day. Also note that you can have several milestones per 'feature': one for implementing it, another for writing its test set, and a third for having it pass all tests.


      That sounds wonderful, with one little caveat: I've discovered that if I separate tests from the code in any way, shape, or form, clients say "let's save some money by skipping those tests". Instead of trying to justify the tests, I don't mention tests at all (unless they ask) and just turn in my code with a full test suite. Since I get done on time, there's never been a problem with that route.

      Aside from that, I think next time I might try the milestone route (or "deliverables", as dragonchild puts it). I currently create full estimates, and now that I look at them, my task breakdowns seem to fit milestones quite nicely.


      New address of my CGI Course.

Re: (OT) Proving Productivity?
by dws (Chancellor) on Aug 05, 2003 at 18:13 UTC
    How do you really measure personal productivity and, more importantly, how do you justify it to a client?

    Brother Ovid,

    Having followed along for a few years now as you've related stories about how flaky your clients can be, I respectfully submit that you are trying to solve the wrong problem by turning this into an issue about your personal productivity. By looking for ways to justify your productivity when you're dealing with a client who can't or won't clearly articulate requirements, you are making their problem your problem. That's a recipe for driving yourself nuts.

    The best you can hope to do in the situation you're in is to satisfy their requests in an amount of time that's reasonable to them, at a level of quality that's acceptable to them, and for an amount of money that's reasonable to them. It sounds like you're doing this, but if it leaves you feeling defensive about your productivity, perhaps you need a stabler environment.

    If you insist on measuring something, measure the stability of the requirements. Track requirements changes, and graph, over time, the average number of changes per requirement, and mean-time-until-change. This shifts the focus back where it belongs.
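A quick sketch of what tracking requirements stability might look like (the change log below, requirement IDs included, is entirely hypothetical):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical change log: each entry is a requirement ID and the
# project day on which the client (re)stated it. Day-0 entries are
# the original requirements; later entries are changes.
my @log = (
    { req => 'REQ-1', day => 0 },  { req => 'REQ-1', day => 12 },
    { req => 'REQ-2', day => 0 },  { req => 'REQ-2', day => 5 },
    { req => 'REQ-2', day => 9 },  { req => 'REQ-3', day => 0 },
);

my ( %last_seen, @gaps );
my $total_changes = 0;
for my $entry (@log) {
    if ( exists $last_seen{ $entry->{req} } ) {
        $total_changes++;
        push @gaps, $entry->{day} - $last_seen{ $entry->{req} };
    }
    $last_seen{ $entry->{req} } = $entry->{day};
}

my $requirements = keys %last_seen;    # scalar context: number of reqs
my $mean_gap     = 0;
$mean_gap += $_ / @gaps for @gaps;     # mean time until change

printf "%.2f changes/requirement, mean time until change %.1f days\n",
    $total_changes / $requirements, $mean_gap;
```

Graph those two figures over time and the client's churn, not your output, becomes the visible trend.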

      Heh :) Thank you. Every once in a while, I need to get slapped back into reality. I have come to expect that clients won't give me good information, but I'm still expecting myself to give good information in return. You're right. I have to put this burden back on the client. I've preached such things before. I just need to eat my own dog food.


      New address of my CGI Course.

        Hi Ovid, thanks for the brainfood you write that I get to read during my lunch break...

        but... am I just off the planet in this - why hasn't someone mentioned Function Point Counting?

        It's an industry-recognised metrics technique. Of course it may be that this is a very large sledgehammer for a small nut... but there may be a kernel (sorry) of an approach for you. And yes, this is your client's problem as dws pointed out.

        Disclaimer: not a member, nor a knowledgeable FP Counter 8-)
Re: (OT) Proving Productivity?
by gmax (Abbot) on Aug 05, 2003 at 16:18 UTC

    Measurement is evil. It is difficult to implement and whatever you do to enforce a measurement policy you can end up with some unpleasant side effects.

    I agree with Joel Spolsky's article that most measurement practices are counterproductive.

    Software organizations tend to reward programmers who (a) write lots of code and (b) fix lots of bugs. The best way to get ahead in an organization like this is to check in lots of buggy code and fix it all, rather than taking the extra time to get it right in the first place. When you try to fix this problem by penalizing programmers for creating bugs, you create a perverse incentive for them to hide their bugs or not tell the testers about new code they wrote in hopes that fewer bugs will be found. You can't win.

    Besides, what is lots of code? If a programmer writes some lengthy and redundant code while another one solves the same problem with a clean subroutine, who's the winner?

    Personally, I have achieved good results by giving the developers full responsibility for an application or a given feature and letting them set the milestones for their work, under my loose supervision. When the feature was released, I gave them full credit with the users. Hubris was a good incentive. It worked. I am not sure it could work everywhere. But depending on the environment it could be better than a policy of strictly checking how many lines of code were written.

     _  _ _  _  
    (_|| | |(_|><
Re: (OT) Proving Productivity?
by simon.proctor (Vicar) on Aug 05, 2003 at 16:08 UTC
    I'd just like to reinforce Abigail's comments with my own personal situation. When I first started my job, I rarely got requirements; my line manager was happy to give me a verbal list of things it should do (in general) and then make modifications once he had seen the *prototype*. Fortunately, I rapidly learned to ask directed questions, which meant that I only ever really suffered from feature creep rather than from sweeping changes.

    Nowadays, he has realised that if I build by guesswork he doesn't get what he wants so I do get formal requirements.

    Throughout both of these periods, however, I would agree what he would get and when. At the appointed time, I would provide the work, and he would either be happy (or not). At no time have we discussed lines of code. What you deliver is most important - not how many keystrokes it takes. After all, why would the client care?

    To put it a different way, the industry we work in emphasises code re-use. Once your requirements are more concrete, you will (I assume) create a more concrete design to help with code re-use. So if you spend two days producing 400 lines of code which then get re-used 40 times throughout the remaining 10 days of your work - how many lines of code does that count for? 40*400?

    I'm also reminded of a Dilbert cartoon (can't remember the date, so no link) where the PHB offers bonuses based on bugs fixed. Spend the first week coding 80 bugs and the next two days fixing them, and earn yourself a new boat (or whatever). As you can see, it just doesn't work ;).

    Just my 2p,


      Part of my current problem was that I showed up onsite and my client, who had previously handed me a decent set of requirements, handed me an amorphous blob of partially hand-written notes from an end client who really didn't know what he wanted. (One of my programs needed to be able to read his mind, and another program had requirements that guaranteed that the other programs couldn't work.) I was supposed to get to work immediately, but I needed to nail this down. I need to find better strategies for quickly reacting to situations like this.

      And the Dilbert cartoon, if I recall correctly, was based on a true story :)


      New address of my CGI Course.

Re: (OT) Proving Productivity?
by Aristotle (Chancellor) on Aug 05, 2003 at 16:23 UTC

    Given the expressiveness of Perl, I think 150 lines of code per day is at least decent. I've never tried to estimate my own productivity (which is difficult anyway when your project is 85% research work), but I know 100 lines of code is a fair amount of work.

    Your key problem is changing requirements. There's really no way to estimate your net productivity by looking at the code when the relation between code written and completion percentage gets skewed that way. I think your best bet for that situation is to work with the extreme programming concept of user stories, where a user story is immutable once put down. Any change to it is counted as a new user story, and of course discarded requirements do not result in discarding their count of completed user stories from the total work done.

    So if a change of requirements obsoletes two completed user stories, requires changes to another two completed ones, and adds three new stories, this change has incurred a cost of seven user stories. If another change affects a dozen completed user stories and adds a few new ones, you have maybe 15 new user stories. Now if you start piling those up, the client will hopefully start to understand that their "little changes" are actually causing a serious amount of work.
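The bookkeeping for a change like that is almost nothing - which is the point, since it's the running total that makes the cost visible. A sketch, using the hypothetical story counts from the example above:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# One requirements change, costed in user stories as described above:
# obsoleted and reworked stories stay in the count of work done, and
# the rework itself is counted again as new stories.
my %change = (
    obsoleted => 2,    # completed stories the change throws away
    reworked  => 2,    # completed stories that must be modified
    added     => 3,    # brand-new stories the change introduces
);

my $cost = 0;
$cost += $_ for values %change;

print "This 'little change' costs $cost user stories.\n";
```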

    Makeshifts last the longest.

      Really? I know that if I start counting lines of code, I go something like this:
      Monday - 450
      Tuesday - 3
      Wednesday - 275
      Thursday - 0
      Friday - 0

      Does this mean I'm only productive on Mondays and Wednesdays? Or does it mean that I was more productive on the other days, because I was thinking and designing and working on install procedures and user interfaces and the like, none of which involves actual code?

      My opinion is that if you spend more than 20% of your time coding, you are being less productive than you could be, especially in Perl, where 90% of every Perl application has already been written.

      We are the carpenters and bricklayers of the Information Age.

      The idea is a little like C++ templates, except not quite so brain-meltingly complicated. -- TheDamian, Exegesis 6

      Please remember that I'm crufty and crochety. All opinions are purely mine and all code is untested, unless otherwise specified.

        Ovid was talking about averages. Your example happens to average to roughly 145 LOC/day.

        Not to mention you missed my point, which should have been obvious from how much attention I devoted to trying to measure productivity by user stories.

        Makeshifts last the longest.

Re: (OT) Proving Productivity?
by hsmyers (Canon) on Aug 05, 2003 at 17:04 UTC
    The problem with metrics is related to the problem of work estimation. The remark I usually make is something like
    If I've done it before, then my estimate of time will be nearly 100 percent accurate. If I've never done anything remotely like it then my time estimate will be a wild-eyed guess. If between these two extremes, scale the reliability of my estimate accordingly.
    Roughly translated to your situation, a 'good' number of lines coded must also be based on a sliding scale, typically related to the amount of experience you have on similar projects. More to the point, on some projects 50 lines might be a near-superhuman performance, so there is no hard and fast rule here. The good news is that lines of code as a metric is at least bounded: zero on one end and some largish number on the other (human limits and all that)!


    "Never try to teach a pig to sing; it wastes your time and it annoys the pig."
Re: (OT) Proving Productivity?
by johndageek (Hermit) on Aug 06, 2003 at 00:22 UTC
    Metrics are a challenge: a contest to make your metric look good.

    Specifications are how a boss expresses his/her misconception of the mis-understanding the committee has of the need of the worker.

    Problem solving takes understanding of the real need, resource limitations, an ability to think outside of the box, applying the KISS method, and taking into account the human factor.

    Pay a programmer by LOC and you will get lots of LOC. Can you imagine paying a carpenter to build your house by the number of nails he uses?

    Payment/measurement needs to be based upon ROI. If my solution saves a clerk 15 minutes a day, that's $10 per hour / 4 * 5 days per week * 50 weeks, or $625 per year. My solution had better not cost more than $1000, or about a year and a half for payback.
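That arithmetic, spelled out (same figures as above; the $1000 is the stated cost ceiling, not a real quote):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# ROI figures from the example: 15 minutes/day saved by a $10/hour
# clerk, 5-day weeks, 50 working weeks a year, $1000 solution cost.
my $hourly_rate    = 10;
my $hours_saved    = 15 / 60;    # 15 minutes/day, i.e. $10/4 per day
my $days_per_week  = 5;
my $weeks_per_year = 50;
my $solution_cost  = 1000;

my $yearly_saving = $hourly_rate * $hours_saved
                  * $days_per_week * $weeks_per_year;
my $payback_years = $solution_cost / $yearly_saving;

printf "saves \$%d/year; \$%d solution pays back in %.1f years\n",
    $yearly_saving, $solution_cost, $payback_years;
```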

    The hardest part of a justification is when there is a substantial framework/infrastructure investment required. Payback periods become longer, and future gains based upon that investment need to be analyzed.

    Why all the meandering? Because we are all paid for the benefit we provide to our customers/employers. Learn to express it in a way that any business person can understand and you will work on more extensive projects and provide more value as well as reap greater benefits.

    Unless you want them to do all the thinking, and try to measure your performance by some number marginally related to what you do.

    I am now stepping down from the soap box.


Re: (OT) Proving Productivity?
by BrowserUk (Pope) on Aug 06, 2003 at 04:06 UTC

    An interesting discussion may ensue if you suggest that it will take you as long to write the code as it does your boss/client (or their agent) to write the specification for that code, plus 2 days per question that you have to ask to clarify the specification, plus 2 days for each line of existing code affected by the changes arising from your questions.

    It is fairly easy to argue that code is almost exactly proportional in complexity to its full specification, and takes around the same amount of time to write down, if you exclude delays through arranging meetings with clients, attending meetings with clients, and waiting for answers and decisions from clients.

    Each question by which the specification falls short of being a "full specification" adds the time taken to resolve that question. In some cases, resolving the question can be done

    • from experience
    • by lookup to some external material in your possession.
    • by asking a "client expert".
    • by asking the client to perform research.
    • by performing the research yourself.

    I emphasise that the questions involved are not "coding" or "implementation" questions, only "specification", "requirements" and "client-specific criteria and knowledge" questions - i.e. those that you (or anyone not a part of the client's company and processes) could not be expected to answer yourself.

    That all boils down to the idea that if it takes the client 10 days to write down (from scratch) a full specification of the program or system, then it is likely that I can code it in that same time. However, every time I encounter a question that I cannot directly answer from that specification, it will require me to break from the process of coding to reference an external (to myself) source.

    Sometimes, with the items nearer the top of the list, I may "know" the answer from experience, or be able to look it up quickly and easily.

    As you move down that list, the time required to find out the right source of the answer and arrange for contact with that source gets longer. If I need to obtain authorisation for the variation or amendment of specification as a result, the delay will be longer still.

    At the top of the list, the delay may be on the order of minutes. Towards the bottom, the delays may be on the order of weeks. Travelling to the client to obtain information adds further delays, but even using the phone, finding the right person to answer the question, finding them at their desk, conveying the question to them, understanding the reply, agreeing that you both have the same understanding of both question and answer, and finding the "authorised person" to sign off on the resulting change to the specification all take additional time. If either party (you or they) has to do research to answer the questions, the delays grow again.

    If you are sat at the same desk, on a daily basis, as a client agent with the knowledge and authority to answer questions arising, verbally, across the desk, delays are minimal. The further you move from this ideal, the longer they get. An office door between you: longer. A floor of the same building: longer again. Another building, site, town, or country (also language!) all exacerbate the problem. Multiple persons to reference, different persons for authorisation, and approvals committees exacerbate it further - and don't get me started on the situation where there is NIH syndrome or a personality conflict!

    The holy grail of e-mail is that it levels these playing fields, but I've yet to see that happen.

    The ideal situation is that the person with the knowledge to write the specification is the same person with the authority to sign it off. In this case you stand a chance of having them understand the problems involved. They would understand how hard it is to encapsulate ideas into prose that stands up to scrutiny, and may therefore have an inkling of how hard it is to convert that prose into code. If your authorising person is a "just give me the bottom line" type, you're out of luck.

    Until we have a specification mechanism that is as accurate and unambiguous as an engineering drawing or blueprint, plus the software equivalents of CAD for producing them and CAM for converting them to code, there will continue to be a 'finger in the air' aspect to drawing up schedules, and any 'piecework' metrics like LOC and man-days are as meaningless as the figures you hear that it takes 12 men 1 day to build a BMW 3-series.

    Just the development cycle of a new BMW takes years and costs billions, and the cost to the consumer can only be made reasonable by amortising those development costs over millions of units; the costs and time scales of developing software can only be brought down to reasonable levels by the same factors of scale. There will always be customers willing to pay for bespoke development, but increasingly these will become the province of the military and medical fields, where there is a large mass of emotion that allows cost to be secondary. Even with the military this is tending to move toward generalised solutions to a wide range of problems, as is evidenced by things like the HUMVEE, JSF, JDAM and all the other military acronyms that now start with J for "joint". It is even beginning to happen in medicine, with the (very) old style of local cottage hospitals of my youth increasingly giving way to a "centres of excellence" approach.

    Amortisation through component (code) re-use is a first step in making software development a more predictable, measurable process, and in this area CPAN is a great strength of Perl. Unfortunately, there are still way too many situations where mixing and matching these components is a little like trying to thread a 1/4" BSF nut onto a 3/16" BSW bolt using a 12mm spanner. Each component works well for its designed application, but the 'packing' and 'grease' required to make them come together is still an unmeasurable process. Perl's saving grace is that it is wonderful glue that, used correctly, requires sparing application.

    How does any of this answer your question? I'm not sure it does, but the trick, if there is one, to cordial and productive client/coder relationships is a mutual understanding of the problems and a desire to reach amicable solutions, plus some trust. Any such relationship that is based solely upon 'rules and regs', 'tender & contract' or piecework metrics - i.e. the need to "prove" - is fraught with problems. If you can persuade your client to truly recognise that, half the battle is won. Then all that's left is to engender trust and write the code :)

    Good luck!

    Examine what is said, not who speaks.
    "Efficiency is intelligent laziness." -David Dunham
    "When I'm working on a problem, I never think about beauty. I think only how to solve the problem. But when I have finished, if the solution is not beautiful, I know it is wrong." -Richard Buckminster Fuller
    If I understand your problem, I can solve it! Of course, the same can be said for you.

      The problem with an exact specification is that when you have one, you don't really need to program. The whole job of a programmer is to disambiguate the specification, to make it mathematically exact so that the machine can execute it. This is in contrast with the design of material things, where, given a specification, you still need to put it through a physical process.

      Update: OK - perhaps that was too trollish. You cannot treat the specification as a program; it is just a definition of the program. You still need to find the program complying with the definition. But it is very close, and writing one might be no more difficult than writing the program - so it could be quite common for the program to be the specification.

        The idea is not to convince the client to render a full specification, but to encourage them to understand--through the expedient of having them consider how much work would be involved in doing so--that giving absolute answers to "how long/how much" questions regarding program development, especially in terms of physical metrics, is nearly impossible.

        Milestones are a better metric, except that non-developers have a habit of coming up with equations like: of 20 milestones, the first 4 were completed in 5 weeks at a cost of X thousand currency units, so the project will take 25 weeks and cost 5X thousand currency units; the budget for the project was only 3X thousand currency units and the projected timescale was 4 months, therefore the project will run wildly over time and budget and we should consider cancelling it now.

        The developer(s) then spend most of the next 4 weeks in meetings trying to disprove the math and justify why the project is not behind schedule, by which time it is.

        If you can turn the onus for precision back on the specification, then you can 'relent' in favour of a compromise that satisfies both parties and perhaps achieve a good working relationship that doesn't fracture on the basis of "You only completed 100 LOC this week!".

        Examine what is said, not who speaks.
        "Efficiency is intelligent laziness." -David Dunham
        "When I'm working on a problem, I never think about beauty. I think only how to solve the problem. But when I have finished, if the solution is not beautiful, I know it is wrong." -Richard Buckminster Fuller
        If I understand your problem, I can solve it! Of course, the same can be said for you.

Re: Proving Productivity?
by BUU (Prior) on Aug 05, 2003 at 15:47 UTC
    I have no actual experience in this matter, so take what I say with a grain of salt. Anyway, it seems in this case the only metric should be 'time to completion', as in, "With this feature set and this language, etc., it will take me 6 months to complete it". But I have to admit I have no real idea how to translate that into a day-by-day metric. Perhaps saying "I have 50% of the code implemented 3 months in" or something might help, but that sounds... tricky. Of course, this sort of ties in with the... fluid... requirements list, as you can just say "adding this feature will take an additional two months" or something and keep a running total.
Re: (OT) Proving Productivity?
by chunlou (Curate) on Aug 05, 2003 at 19:37 UTC

    Basically the same opinions have been expressed already, but:

    Situation one: a developer spends two to three hours hammering out requirements with the client or manager (which could span two or three days), four hours on the design (to achieve the most minimal design/code spec for the requirements), and 100 hours on coding - and the entire project stretches out over two to three weeks (few people work on one thing only the entire time).

    Situation two: a programmer starts coding the same day he receives his spec, be it by email, Post-it note or verbal conversation.

    Often enough, many managers consider the programmer in situation two more productive. Many people don't think a programmer is working until he's typing code. They don't consider "thinking" to be work.

    This is a vicious cycle leading to a lot of rework. Any kind of development always involves iteration. Iteration means each step along the way you make some progress, big or small. Rework means you take one step forward and two steps back.

    Other people have already mentioned deliverables and milestones as a meaningful yardstick for measuring "productivity."

    Eventually, one of the few meaningful productivity measures I would care about is revenue per project. There's no point cranking out a lot of products and making no money. Revenue-per-project productivity cannot be achieved through productive coding alone, but through very critical examination of the requirements as well (something often considered a "waste of time").

Re: (OT) Proving Productivity?
by roju (Friar) on Aug 05, 2003 at 16:35 UTC
    If you're willing to look at XP (it sounds like you already incorporate tests and have to deal with changing requirements, so it might not be a bad idea to look at their methods), they use a metric called velocity.

    XP advocacy - it's not just for smalltalk programmers anymore.

Re: (OT) Proving Productivity?
by artist (Parson) on Aug 05, 2003 at 19:39 UTC
    Where I work, no one has ever attempted to measure code in number of lines. It is measured by how fast I can provide the deliverable, or provide worry-free code. I am lucky to have it measured by the intelligence it provides, and by my willingness to attempt what seems very complicated or nearly impossible at first. It is measured in terms of maintainability, and of integration with the entire system in such a way that it doesn't create any problems in the future.

    When you code, let's say, 150 lines, do you include the lines in the modules from CPAN you use, or just your own lines? If I have 3 days available, I would rather research the whole day, gain a deeper understanding of the problem and the various ways of solving it, and write the code the next day.

    Clients now have to be mature enough to understand the other parameters that go into productivity.


Re: (OT) Proving Productivity?
by bean (Monk) on Aug 06, 2003 at 19:44 UTC
    I'm currently working on combining four classes into one - they all do very similar things and are all cut&pasted&modified versions of each other (although the genius author I inherited them from decided to do things slightly differently in each one, so one is mostly PL/SQL and keeps its variables in organized hashes, another performs most of the logic server-side and has tons of class variables instead of hashes, etc., etc.), and I keep having to add identical functionality to each one as requirements change.

    So anyway, I'll be cutting the original 5,300 LOC by at least half, possibly two-thirds (no more, because he liked writing 400-line uncommented functions). It should take about three solid weeks of work - that means I'll have 2,650/15 = -177 LOC/day! It's going kind of slowly because I'm teasing the logic out of undocumented, buggy code. So I think LOC is a Load Of Crap (I think maybe my predecessor was paid by the line, and it shows).

    The problem with Productivity with a capital 'P' is it's hugely subject to interpretation. Am I more productive if I grind out crummy code that works quickly? Would the time have been better spent doing it correctly, so that when the requirements change the code is flexible enough to handle it? That's part of the art of productivity - knowing when to grunt out a steaming LOC and when to carve a flawless diamond. The funny part is the better the requirements are to start with, the less the quality of your code matters.
Re: (OT) Proving Productivity?
by TomDLux (Vicar) on Aug 06, 2003 at 02:05 UTC

    If you shorten existing code, is that not more valuable than when you lengthen it?


Re: (OT) Proving Productivity?
by wolfger (Deacon) on Aug 06, 2003 at 15:19 UTC
    Measuring productivity in LoC is an interesting idea...
    If I include a module, do I get to count all the lines of that module in my productivity? Or do I have to rewrite the module from scratch to be productive?
    Also, since Perl doesn't care about carriage returns, I can greatly increase my productivity by typing one character per line :-)

    Believe nothing, no matter where you read it, or who said it - even if I have said it - unless it agrees with your own reason and your own common sense. -- Buddha

Node Type: perlmeditation [id://281028]
Approved by gmax
Front-paged by krisahoch