You would probably think of developing a method of measuring technical debt (rather than a simple equation or model) because it must necessarily be project-specific, and the "true cost" of each item (e.g., deferred refactoring) is best known by those who would implement it.
Really? I find that a dubious claim, and I'm not going to accept it unless you add some substance to it.
The "true cost" of each item depends not only on how often bugs need to be fixed, but also on how many additional features need to be implemented - and that's likely to be the more significant cost. After all, technical debt often comes from taking shortcuts, which means less code. Less code means fewer bugs and easier fixes, but it makes the system harder to extend. And it's usually not the developer who decides whether and how many features will be added - it's the customer (internal or external).
And this brings up another point. Debt in the financial world is usually a well-understood quantity: a mortgage lasts 20 or 30 years; a loan between banks has a known interest rate and a known repayment date. Not so with software. I may have some idea how much "technical debt" a project creates, but it's often not clear at all how long that debt will last. A piece of software may be obsolete within a year, or it may last 20 years - and even if it lasts 20 years, there may never be demand for change.
Quantifying debt (and its risks) means you need a good idea of the future. Financial institutions manage that with contracts and collateral. That doesn't translate to software.
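To make the point concrete, here's a toy model - entirely my own assumptions, not an established formula. Treat the extra cost of each change forced by a debt item as "interest", and weight each year by the probability that the software is even still alive (not yet obsolete). All the numbers are made up; the point is only that the result swings wildly with the lifetime guess.

```python
def expected_debt_cost(extra_cost_per_change, p_change_per_year,
                       p_obsolete_per_year, horizon_years):
    """Expected total 'interest' paid on a technical-debt item,
    discounting each year by the chance the software is already obsolete.
    A toy model with invented parameters, not an established formula."""
    total = 0.0
    p_alive = 1.0  # probability the software is still in service this year
    for _ in range(horizon_years):
        total += p_alive * p_change_per_year * extra_cost_per_change
        p_alive *= (1.0 - p_obsolete_per_year)
    return total

# The same shortcut, priced under two different lifetime guesses:
short_lived = expected_debt_cost(10_000, 0.5, 0.5, 20)   # likely obsolete soon
long_lived  = expected_debt_cost(10_000, 0.5, 0.05, 20)  # likely to stick around
```

With these invented numbers the short-lived case costs roughly $10k in expected interest while the long-lived case costs over six times that - same debt, same code, completely different "true cost". And the lifetime parameter is exactly the thing nobody knows up front.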