Re: RFC: Exploring Technical Debt
by whakka (Hermit) on Sep 24, 2009 at 20:42 UTC
You would probably want to develop a method of measuring technical debt (rather than a simple equation or model), because the measurement must necessarily be project-specific: the "true cost" of each item (e.g. deferred refactoring) is best known by those who would implement it. Experience with similar projects helps produce more accurate estimates.
Simplified, the problem can be stated: "Solving debt item D now would take N man-hours. D makes it X% longer to fix certain kinds of bugs on average.* We can expect to encounter these kinds of bugs Y times per year. The average expected time to fix them is Z man-hours. Given the staffing on this project I would weight this figure by W." You should similarly add the man-hours saved in implementing new features. D's per annum opportunity cost (in man-hours) is then the sum of X * Y * Z * W over all such kinds of bugs, plus the analogous sum for new features. This can be thought of as the future "earnings" from fixing D now.
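The per-annum opportunity cost above can be sketched as follows; every number here is a hypothetical figure chosen for illustration, not data from the post:

```python
# Per-bug-kind inputs: X (extra fix time as a fraction), Y (occurrences/year),
# Z (avg man-hours per fix), W (staffing weight). All numbers hypothetical.
bugs = [
    (0.30, 6, 4.0, 1.0),   # e.g. regressions in the code D touches
    (0.15, 12, 2.0, 1.0),  # e.g. minor defects slowed by the same debt
]
# Analogous figures for new features: (extra man-hours per feature, features/year).
features = [
    (8.0, 3),
]

# D's per annum opportunity cost in man-hours: sum of X*Y*Z*W over bug kinds,
# plus the analogous sum for features.
annual_cost = (sum(x * y * z * w for x, y, z, w in bugs)
               + sum(extra * per_year for extra, per_year in features))
```

With these made-up inputs the item costs roughly 35 man-hours per year while it stays unfixed; that figure is the "annual earnings" used in the discounting below.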
It then becomes an intertemporal cash-flow problem. You must weigh the time cost of making the fix now against how much time is repaid in the future, considering only those years you expect the project to be maintained (the further out a year is, the less its estimate matters, since it is discounted more heavily). The ultimate comparison for each item is the cost of fixing D now (N) versus the present value of all future productivity losses from D in maintenance and new features (F). If N - F < 0 it's better to fix the problem now; otherwise we should shelve it for later, if ever.
To calculate F you must determine a discount rate, a non-trivial task based on a variety of difficult-to-estimate factors. In general it's the necessary return on investment of paying developers now to be more productive in the future, plus risk (e.g. bankruptcy risk from delaying a shipped product, risk of employee turnover, security risk, inflation risk, etc.). It's difficult to do this simply and in a suitably general way. If developer time would otherwise go to adding new features on the same product, the value of those features should be used to calculate an internal rate of return (based on an analysis of how much more the product would sell for, or how many more copies it would sell). Similarly, if that time would otherwise go to another project, that project's marginal IRR should be calculated and used. Alternatively, the company's WACC may or may not be appropriate. Calculating F is then just a matter of running an NPV calculation in a spreadsheet.
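A minimal sketch of the N vs. F comparison, assuming a constant annual loss from D (the X * Y * Z * W sum above), a single risk-adjusted discount rate, and losses realized at the end of each year; all of the figures are hypothetical:

```python
def present_value(annual_loss, rate, years):
    # PV of losing `annual_loss` man-hours at the end of each remaining year.
    return sum(annual_loss / (1 + rate) ** t for t in range(1, years + 1))

N = 80.0            # man-hours to fix D now (hypothetical)
annual_loss = 34.8  # man-hours lost per year while D remains (hypothetical)
r = 0.20            # risk-adjusted discount rate (hypothetical)
years = 5           # remaining expected maintenance life of the project

F = present_value(annual_loss, r, years)
fix_now = N - F < 0  # fix now iff the PV of future losses exceeds the fix cost
```

Here F works out to roughly 104 man-hours, so with these inputs it pays to fix the item now despite the 80-hour up-front cost.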
Since each input is an estimate, it would be prudent to repeat this calculation with a range of values for each estimate, chosen according to how confident you are in it (how well its cost is known), including worst-case scenarios. For example, you might use a steep discount rate if your company's survival hinges on the success of the product shipping in four months.
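The sensitivity check can be done by repeating the comparison over a grid of discount rates and maintenance lifetimes; again, all of the inputs are hypothetical:

```python
def present_value(annual_loss, rate, years):
    # PV of a constant annual loss, discounted at `rate`, realized end-of-year.
    return sum(annual_loss / (1 + rate) ** t for t in range(1, years + 1))

N = 80.0            # man-hours to fix D now (hypothetical)
annual_loss = 34.8  # man-hours lost per year if D is deferred (hypothetical)

results = {}
for rate in (0.10, 0.20, 0.40):   # optimistic .. "ship in 4 months" steep
    for years in (2, 5):          # short vs. long remaining maintenance life
        F = present_value(annual_loss, rate, years)
        results[(rate, years)] = F
        verdict = "fix now" if N - F < 0 else "shelve"
        print(f"r={rate:.0%}, years={years}: F={F:6.1f} -> {verdict}")
```

With these inputs the verdict flips: over a long maintenance life at a modest rate the fix pays for itself, but at a steep, survival-driven discount rate the item gets shelved even over five years.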
**Updated for clarity.
*X must be determined within your own technical domain, so consulting existing studies might be misleading.