I agree in principle with most of what you've said, but there are a few qualifiers that I would add:
If I am forced to pick a "most important" measure of quality, I would argue for program completeness and correctness. In other words, the program should correctly do everything it is supposed to do and nothing it is not supposed to do. Once you have correctness and completeness down, then, if performance is still an issue, reach for a profiler.
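To make that concrete, here is a minimal sketch in Python (the `count_common` function and its workload are invented for illustration): write the obviously-correct version first, then let `cProfile` tell you whether it is actually the bottleneck before you touch it.

```python
import cProfile
import io
import pstats

def count_common(xs, ys):
    # Correct but naive: O(n*m) membership tests against a list.
    # Optimize (e.g. convert ys to a set) only if profiling says so.
    return sum(1 for x in xs if x in ys)

profiler = cProfile.Profile()
profiler.enable()
result = count_common(list(range(2000)), list(range(1000, 3000)))
profiler.disable()

stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream).sort_stats("cumulative")
stats.print_stats(5)  # report the five costliest entries by cumulative time
print(result, stream.getvalue())
```

The point of the sketch is the order of operations: the naive version gives you a trusted answer to test any "fast" replacement against.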
In some cases, such as real-time programming, responsiveness is completeness. It's no help to have the brakes activate with the optimal number of newtons of force after the car has already hit the tree. It's better to have a "wrong" solution that applies the brakes a bit too hard or too soft than one that applies them too late.
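A toy sketch of that idea in Python (the force values, deadline, and refinement loop are all invented; a real brake controller would not look like this): compute the best answer you can within the time budget, and when the deadline expires, return a conservative fallback instead of a late optimum.

```python
import time

BRAKE_DEADLINE_S = 0.010   # hypothetical 10 ms budget for a braking decision
SAFE_DEFAULT_N = 4000.0    # hypothetical fallback force: "a bit too hard"

def optimal_force(target_n, deadline):
    """Stand-in for an expensive solver; checks the clock between steps."""
    force = SAFE_DEFAULT_N
    for _ in range(50):
        if time.monotonic() >= deadline:
            raise TimeoutError("out of time")
        time.sleep(0.0002)                # simulate expensive work per step
        force = 0.5 * (force + target_n)  # toy refinement toward the target
    return force

def braking_force(target_n, budget_s=BRAKE_DEADLINE_S):
    deadline = time.monotonic() + budget_s
    try:
        return optimal_force(target_n, deadline)
    except TimeoutError:
        # A slightly-wrong answer on time beats the right answer too late.
        return SAFE_DEFAULT_N
```

The fallback branch is the "wrong but on time" solution: it trades a few newtons of precision for a guarantee that some force is applied before the tree.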
Focus on performance after you know the code is right.
In practice, this is easier said than done. What if the fundamental data structures and design decisions are the inefficient part? That can happen when the design gets too abstract, and I've seen it happen many times.
Whenever you need to change the fundamental assumptions of a program, you're looking at either a massive refactoring project or a total rewrite. Just try telling management that the expensive new program you spent months writing for them is too slow and needs to be redone from scratch. That's pink-slip territory for a lot of us.
Sometimes it's better to design efficiently in the first place and worry about abstraction later. Abstraction is nice for solving problems you might never have, but a clean design that solves the problem you actually have might well be smarter and cheaper in the long run.