I think it is well understood where I am coming from, and disparaging comments like “mom and pop” or even “web app” do not, at the very least, advance the argument posed. Let’s all stay on target and pleasantly agree to disagree. There is no single point of view here, and it is possible to speak against a point of view without speaking against the person.
To my way of thinking, languages like Perl are most commonly (though not always) used in situations where raw execution speed is ... irrelevant(!). The processing is typically, I aver, I/O-bound: more dependent on the speed of a network, a drive, or a SAN than on the pure horsepower of the CPU itself. BrowserUK, I specifically acknowledge that your work is an exception to that statement, and very impressive work it certainly is. Nor do I claim that there are no workloads for which an optimizing compiler would be useful. But when I specify blade servers for my own needs, I always err on the side of lots of RAM rather than CPU speed. I am delighted to see near-100% CPU utilization, but with my workloads it will be spread across many processes, not a few, and a faster storage channel is the best buy for me. If CPU utilization falls off, I buy more RAM or look for an I/O bottleneck. If a process has a really big working-set size relative to the other processes in the mix, I look for algorithm changes to reduce it.
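To make the I/O-bound point concrete, here is a minimal sketch (in Python, purely for illustration) of how one can tell the two cases apart: compare wall-clock time to CPU time. The `sleep()` is a hypothetical stand-in for waiting on a network, a drive, or a SAN.

```python
import time

# Hypothetical sketch: distinguish an I/O-bound process from a CPU-bound
# one by comparing wall-clock time to CPU time consumed.
start_wall = time.perf_counter()
start_cpu = time.process_time()

time.sleep(0.5)  # stand-in for an I/O wait -- no CPU work happens here

wall = time.perf_counter() - start_wall
cpu = time.process_time() - start_cpu

print(f"wall: {wall:.2f}s  cpu: {cpu:.3f}s")
# Wall time is dominated by the wait while CPU time stays near zero;
# no amount of faster machine code can shorten the wait itself.
```

When wall time vastly exceeds CPU time, as it does here, a faster CPU or a better code generator buys essentially nothing.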
In what I consider to be the general case, if the speed of a Perl-based application is judged to be inferior, I would assert that an algorithm change, perhaps a very tightly targeted one, is more likely to succeed. Changes to machine-code generation might be of no value at all, because the system is literally in “hurry up and wait” mode, and no optimizer can do anything about that.
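As a hypothetical illustration of why an algorithm change usually dwarfs anything a code generator could recover (sketched in Python; the data and sizes are made up):

```python
import time

# Same task two ways: count how many needles appear in a haystack.
haystack = list(range(5_000))
needles = list(range(0, 10_000, 7))

# O(n*m): linearly scan the list for every needle.
t0 = time.perf_counter()
hits_scan = sum(1 for n in needles if n in haystack)
scan_time = time.perf_counter() - t0

# O(n+m): build a hash-based set once, then use constant-time lookups.
t0 = time.perf_counter()
lookup = set(haystack)
hits_set = sum(1 for n in needles if n in lookup)
set_time = time.perf_counter() - t0

print(f"scan: {scan_time:.4f}s  set: {set_time:.4f}s  hits: {hits_set}")
```

Both versions produce the same answer, but the set-based one is faster by orders of magnitude; an optimizing compiler shaving a constant factor off the linear scan could never close that gap.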