Re: Optimizing into the Weird Zone
by BrowserUk (Pope) on Aug 12, 2003 at 14:14 UTC
As noted, any optimisations applied at the level of Perl (5 or 6) source are rarely, if ever, going to be affected by the phenomenon described. As almost every clause of every line of an HLL like Perl results in many machine instructions being executed, any pipeline- or cache-level effect is lost in the noise.
As for "What does it mean?": it's an interesting phenomenon, and one which means that a lot of clever people are going to spend a lot of time analysing the effects, categorising the edge cases, and developing heuristics for use in compilers targeted at pipelined processors, to ensure that the optimisations applied by those compilers 'do no harm'.
Individually, the benefits of these pipeline- and cache-dependent optimisations are almost immeasurable on today's processors. It is only when testing large-scale, CPU-intensive applications compiled with these optimisations enabled that the benefits really come to fruition. If you are running a derivatives regression analyser written in a spreadsheet that was compiled with a compiler that correctly optimises for the cache and pipelining of the processor you are running on, and you get the answer to your "is this a short-term inflection?" question a few minutes or even seconds before your competitors, you'll be grateful for the efforts of those clever people.
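To make the cache-dependence point concrete, here is a minimal sketch (in Python rather than Perl, purely for illustration; the function names and sizes are my own, not from the thread). Both functions compute the same sum, but they visit memory in different orders. In a language with flat arrays (C, Fortran), the column-major order strides across cache lines and can be several times slower; in Python the effect is muted by pointer indirection, so treat this as a sketch of the access pattern, not a benchmark.

```python
def sum_row_major(matrix):
    """Visit elements in the order they are laid out: cache-friendly."""
    total = 0
    for row in matrix:
        for value in row:
            total += value
    return total


def sum_column_major(matrix):
    """Visit elements column by column. With a flat, contiguous array
    this strides across rows and defeats the cache; the arithmetic
    result is identical either way."""
    total = 0
    rows, cols = len(matrix), len(matrix[0])
    for c in range(cols):
        for r in range(rows):
            total += matrix[r][c]
    return total


if __name__ == "__main__":
    n = 500
    matrix = [[r * n + c for c in range(n)] for r in range(n)]
    # Same answer from both traversal orders; only the memory-access
    # pattern (and hence the cache behaviour) differs.
    assert sum_row_major(matrix) == sum_column_major(matrix)
```

A compiler that understands the target's cache can sometimes interchange such loops automatically, which is exactly the kind of optimisation the clever people above are developing heuristics for.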
Processor manufacturers are almost duty bound to discover, analyse and develop the heuristics so as to ensure that compiler manufacturers have the wherewithal to show their processors off to the best effect. People buying servers rarely consider buying a slower one because it makes the job of compiler source code maintenance easier. They want throughput.
I for one am very glad that a lot of very clever people have spent an inordinate amount of time optimising the perl source code. What is the big deal about the perl 5 regex engine? In a word: speed. I benefit from their efforts every time I write a script, as does every other person that uses it, including all those that eschew the benefits of optimisation. Anyone who has looked at the perl source has to realise that it isn't the way it is for the benefit of ease of maintenance.
And therein lies the rub. If your code will only ever be used for one purpose, and that purpose is satisfied by the code you write, you have done a good job. However, if you're writing re-usable code (eg. anything that goes on CPAN), then you have to consider the uses that your code may be put to in the future as well as the use you are writing it for. As it is impossible to predict the operational requirements, including the performance criteria, of every application it might be put to, the only way you can do justice to those that will use your code in the future is to do your best to make it optimal for the purpose for which it is designed. "Optimal" does not only cover performance, but it doesn't exclude it either.
Obviously, optimising your code to the point that you introduce bugs does no one any favours. It's a trade-off in which correct code wins every time--but to use more memory than you need to, when a slight restructuring of your code would use less, is just profligate. We could use a bubble sort for everything. It's very easy to write, understand and maintain. We don't. We use complex, recursive, often highly optimised algorithms, to the detriment of ease of coding and maintenance, for one purpose: performance.
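The bubble sort point can be sketched in a few lines (again in Python rather than Perl, for illustration). The whole algorithm fits in one screen and is trivially easy to verify by eye, but it is O(n^2); the library `sorted()` it is compared against uses a far more intricate algorithm (Timsort in CPython), chosen precisely because performance was judged to be worth the implementation complexity.

```python
def bubble_sort(items):
    """The maintainer's dream: short, obvious, easy to prove correct.
    Also quadratic, which is why nobody's standard library uses it."""
    items = list(items)  # work on a copy, leave the input untouched
    n = len(items)
    for i in range(n):
        swapped = False
        # Each pass bubbles the largest remaining value to the end.
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
                swapped = True
        if not swapped:  # already sorted: stop early
            break
    return items


if __name__ == "__main__":
    data = [5, 3, 8, 1, 9, 2]
    # Same result as the built-in, highly optimised sort.
    assert bubble_sort(data) == sorted(data)
```

For a dozen elements the two are indistinguishable; for a million, the simple version is the one you would rip out and replace.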
Any buyers for a version of perl that is restructured for the sole criterion of ease of maintenance--everything neatly indented, and properly abstracted--if it runs half as fast? What if it's 5 times slower?
Examine what is said, not who speaks.
"Efficiency is intelligent laziness." -David Dunham
"When I'm working on a problem, I never think about beauty. I think only how to solve the problem. But when I have finished, if the solution is not beautiful, I know it is wrong." -Richard Buckminster Fuller
If I understand your problem, I can solve it! Of course, the same can be said for you.