Re^6: Display shortened paragraph
by blazar (Canon) on Feb 03, 2006 at 11:05 UTC
My Re^3: Display shortened paragraph reply was intended to discuss the common use of benchmarks at PerlMonks and why a benchmark is a useful tool for illustrating why some coding decisions are made. I'm sorry I didn't make that sufficiently clear to you.
It was sufficiently clear. I don't know if I was clear enough in the first place: I don't know whether you share my point of view, but there's a premature-optimization syndrome going on here, with a tendency to become epidemic. Taking this into account as well, I agree with and fully support your claim that a "benchmark is a useful tool for illustrating why some coding decisions are made". It is one tool out of many, whose relative relevance depends on context.
I like to see benchmarks here when they matter. I see a risk in doing them when they don't matter: precisely the risk of spreading a bad practice or a "negative" way to look at code.
I agree that in the strict context of UI work, computational efficiency in most cases doesn't matter at all. However, the real lesson from the benchmark is that specific-purpose built-in functions perform better than general-purpose functions - and that knowledge can be applied all over.
Then again, this is an obvious, logical piece of information. It would be more interesting if the opposite held, and in some corner cases it may indeed happen. Or at least it may be that the general-purpose function does not perform significantly worse than the specific-purpose one. In those cases a benchmark is relevant.
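To make the kind of benchmark under discussion concrete, here is a minimal sketch using the core Benchmark module to compare a specific-purpose built-in (substr) against a general-purpose regex for shortening a paragraph. The 80-character limit and the labels are illustrative assumptions, not anything from the thread being replied to:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

# A sample "paragraph" to shorten (1000 characters).
my $text = 'word ' x 200;

# Run each snippet for about one CPU second and compare rates.
cmpthese(-1, {
    # Specific-purpose built-in: extract the first 80 characters.
    substr => sub { my $s = substr $text, 0, 80 },
    # General-purpose tool: capture up to 80 characters with a regex.
    regex  => sub { my ($s) = $text =~ /\A(.{0,80})/s },
});
```

Both snippets produce the same shortened string; cmpthese only reports which runs faster on a given perl, which is exactly the sort of result that is interesting only when it contradicts the obvious expectation.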
To someone who has been kicking around the Perl world for a while and already knows everything there is to know about Perl of course the result of any benchmark comes as no surprise.
I beg to differ: first, I am far from "know[ing] everything there is to know about Perl", and I'm sure there are very few people who even come close to it. OTOH, knowing that general-purpose functions will tend to perform worse than specific-purpose ones is not a matter of great expertise - only of good sense!
Second: I bet that however expert you are, there will always be benchmarks whose results come as a surprise. Those are likely to be the interesting, significant benchmarks.
That is not true for everyone here, and particularly it is not true for many of the people who ask for help here, and for those who read through the replies looking for interesting knowledge. Well-constructed benchmarks are a valuable resource here, and often the discussion about how to construct a particular benchmark is valuable too. This sort of discussion, peripheral to the main question, often provides the most useful insight and understanding.
Often... maybe. Not in this case, which is the one we're discussing after all - or similar ones, for what it matters. So you're talking about newbies: but in this case, don't you think that, exactly because they're newbies, seeing benchmarks done everywhere, even where they don't matter at all, they may pick up the habit of doing so all the time, or become worried about premature optimization?
Just to make sure: I never claimed "no benchmarks at all!" My point was, and is: "no benchmarks when they do not matter at all, please!"