Re (tilly) 1: What does "efficient" mean?
by tilly (Archbishop) on Jan 15, 2002 at 22:38 UTC
As I like to say, every good thing you can do has a corresponding cost. What "efficient" means in any domain is that the trade-offs have been chosen to optimize for your problem space.
But there are four important observations that come up again and again.
The first is that there is a sharp division between hard barriers and soft ones. Things like raw performance and memory use tend to be hard barriers: if it won't run on your average machine in an acceptable time, your solution isn't acceptable. No arguing. But once you pass this threshold you quickly get into a region where people won't care if you cut memory use in half and double performance. While there is a grey area, most programs tend to fall pretty squarely on one side or the other. Things like programmer time, by contrast, are much softer barriers. So you want to do the best that you can on the soft barriers while hitting your minimums on the hard ones.
The second observation is that few people can predict, before starting a problem, what kinds of requirements and issues they will run into. The people who can tend to be ones who have solved that problem (or very similar ones) several times, and so can judge from experience. Therefore up-front designs done in a vacuum almost always solve the wrong problems. With rigid development environments, you have to suffer. With more flexible and dynamic ones, it is more appropriate to follow the old machine-gunner's maxim of Ready! Fire! Aim! (ie constantly adjust your aim based on feedback after you start - in the case of a machine gun you get feedback from where your tracer bullets go.)
The third basic observation is that thanks to Moore's law, meeting all of your hard limits keeps getting easier. That means that the relative importance of soft versus hard limits changes with each generation of technology. (Which is why it is now feasible to do serious software development in slow, bloated languages like Perl.)
And the final observation is that optimizing for maintainability first generally gets good results on all criteria of interest. First of all, your code is maintainable. Since most of the cost of software is in maintenance, that cuts overall cost. Since debugging is a large fraction of initial development time and cost, making it easier cuts initial development time and cost too. Since maintainable code is well-modularized, should you run into performance or memory problems, it is generally fairly easy to profile and then optimize a relatively small section of code. And if all else fails, well, you have an easy-to-read prototype for your rewrite. (All else generally doesn't fail, though.)
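A minimal Perl sketch of that last point (the sub names and the Fibonacci example are my own hypothetical illustration, not anything from the node above): because callers only see a sub's interface, a hot spot found by a profiler such as Devel::DProf (`perl -d:DProf script.pl` followed by `dprofpp`) can be rewritten in place without touching the rest of the program.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# The clear-but-slow version: naive recursion, exponential time.
# Suppose profiling showed this sub dominating the runtime.
sub fib_slow {
    my $n = shift;
    return $n if $n < 2;
    return fib_slow($n - 1) + fib_slow($n - 2);
}

# Because the code is modularized behind one sub, we can swap in a
# linear-time iterative version with the same interface, and no
# caller needs to change.
sub fib_fast {
    my $n = shift;
    my ($a, $b) = (0, 1);
    ($a, $b) = ($b, $a + $b) for 1 .. $n;
    return $a;
}

print fib_fast(20), "\n";    # prints 6765, same answer as fib_slow(20)
```

The point is not the Fibonacci trick itself but that well-factored code confines the optimization to one small, easily tested spot.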
So efficiency means a lot of things. But whatever it means for you, the odds are pretty good that you can get away with optimizing for maintainability first, and everything else second.