PerlMonks  

Re: Perl 5 Optimizing Compiler

by sundialsvc4 (Monsignor)
on Aug 13, 2012 at 13:59 UTC ( [id://987127] )


in reply to Perl 5 Optimizing Compiler

The common-sense assumption that “an optimizing compiler will make it significantly faster” presupposes ... I think, incorrectly, that CPU execution speed is the ruling constraint; that the code is “CPU-bound.” In my experience, with all business applications, this is almost never actually true. If the application is, in fact, “I/O-bound,” then it has CPU-time to burn and an optimizing compiler will do nothing useful for it.
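A quick way to test that claim against a particular program is to compare wall-clock time with user-CPU time, using only core modules. A sketch, assuming some large local file to scan (the log path below is a placeholder, not a requirement):

```perl
use strict;
use warnings;
use Time::HiRes qw(gettimeofday tv_interval);

# Sketch: an I/O-bound pass over a large file. If user-CPU time is a
# small fraction of wall time, faster generated code cannot help much.
# NOTE: the path is a placeholder -- substitute any large file you have.
my $t0   = [gettimeofday];
my $cpu0 = (times)[0];

open my $fh, '<', '/var/log/syslog' or die "open: $!";
my $lines = 0;
$lines++ while <$fh>;
close $fh;

printf "wall: %.2fs  user CPU: %.2fs  (%d lines)\n",
    tv_interval($t0), (times)[0] - $cpu0, $lines;
```

When the two numbers are close, the workload really is CPU-bound and compiler improvements would matter; when wall time dominates, the bottleneck is the I/O channel.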

(Bear in mind that an ordinarily CPU-bound activity will be forced into I/O-bound characteristics if it consumes so much memory working-set that it is pushed into significant virtual memory paging.)

The essential code-paths within the Perl compiler/interpreter itself are already well-identified and are optimized. You can therefore write “almost any old thing” in Perl 5 and know that it is being executed in a very efficient way. But the software is still going to spend a preponderance of its time in either an I/O-wait or even an idle-wait. The one-time overhead of JIT compiling, and the sustained overhead of the interpreter, is time that the CPU can afford to lose.

I therefore suggest that you need to instrument your existing programs, starting with the ones that are most business-critical and/or that seem to be causing the most business pain. Why are they waiting, what are they waiting on, and where in the code do the waits occur? Look most closely at the algorithms, and expect to have to redesign them. As Kernighan & Plauger said in The Elements of Programming Style: Don’t “diddle” code to make it faster; find a better algorithm. Do not assume that the CPU is running “round objects to the wall,” because it undoubtedly is not and never could. Do not make the ruling assumption that an optimizing compiler will do diddly-squat for you, because it probably won’t.
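To make that advice concrete: Devel::NYTProf (`perl -d:NYTProf script.pl`, then `nytprofhtml`) will show where the time actually goes, and an algorithm change usually dwarfs anything code-level tuning can achieve. A hypothetical sketch, replacing a repeated linear scan with a hash lookup:

```perl
use strict;
use warnings;

# Hypothetical data: check a batch of ids against a large known set.
my @known_ids = (1 .. 100_000);
my @to_check  = (50_000 .. 50_100);

# Slow: the inner grep rescans @known_ids for every id -- O(n*m).
my @found_slow = grep { my $id = $_; grep { $_ == $id } @known_ids } @to_check;

# Fast: build a hash once, then each membership test is O(1) -- O(n+m).
my %is_known   = map { $_ => 1 } @known_ids;
my @found_fast = grep { $is_known{$_} } @to_check;

print scalar(@found_fast), " ids found\n";  # same result, far less work
```

No optimizing compiler can turn the first version into the second; that change is the programmer's job.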

Certainly, there are bona-fide edge cases, throughout the Perl system and its libraries, that are properly handled right now with “XS” code, in C or C++. These are the hot-spots, and they feature heavy bit-twiddling; exactly the sort of thing that these languages are best at. You might find a hot-spot in your app that could be dealt with in this way, but I rather doubt it.
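For readers who do find such a hot-spot, Inline::C (a CPAN module, assumed installed here, plus a working C compiler) is a low-ceremony way to experiment with the XS route before committing to a full XS module. A minimal sketch with a bit-twiddling inner loop:

```perl
use strict;
use warnings;
use Inline C => <<'END_C';
/* Count set bits across an integer range -- the inner loop runs as
   compiled C, not as Perl ops. */
int popcount_range(int lo, int hi) {
    int total = 0;
    int i, v;
    for (i = lo; i <= hi; i++) {
        for (v = i; v; v >>= 1)
            total += v & 1;
    }
    return total;
}
END_C

print popcount_range(0, 255), "\n";  # prints 1024
```

Inline::C compiles and caches the C on first run, so the sketch measures honestly only from the second invocation onward.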


Re^2: Perl 5 Optimizing Compiler
by Anonymous Monk on Aug 13, 2012 at 14:14 UTC

    So basically you didn't understand that the OP is in the process of reviving efforts to create a real Perl compiler, with the explicit goal of making Perl run at speeds within an order of magnitude of optimized C.

Re^2: Perl 5 Optimizing Compiler
by BrowserUk (Pope) on Aug 13, 2012 at 14:16 UTC
    In my experience, with all business applications, this is almost never actually true. If the application is, in fact, I/O-bound,

    If your "experience" is confined solely to yet-another-shopping-cart applications bolted together coding-by-numbers style from a bunch of CPAN modules, that may be the case, but "business" covers a great deal more ground than a bunch of mom & pop retail outlets.

    There are a huge number and variety of CPU-bound business applications in this world -- else (for example) there'd be no market for about 90% of IBM's hardware offerings -- finance; insurance; oil & gas; pharmaceuticals; entertainment; shipping & logistics; aircraft manufacture; car manufacture; genomics & agriculture; the list goes on and on.

    Dismissing the need for performance because your web app can't utilise it is extremely short-sighted.


    With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.

    The start of some sanity?

      I think that it is well-understood where I am coming from, without disparaging comments like “mom and pop” or even “web app,” which at the very least do not advance the argument posed. Let’s all stay on-target here, all of us, and pleasantly agree to disagree. There is no “one” point-of-view here, and it’s possible to speak against a point-of-view without speaking against the person.

      To my way of thinking, languages like Perl are most commonly but not always used in situations where raw execution speed is ... irrelevant(!). The processing typically is (I aver...) most likely to be I/O-bound, more dependent on the speed of a network or of a drive or a SAN or what-have-you than on the pure horsepower of the CPU itself. BrowserUk, I specifically acknowledge that your work is an exception to that statement, and very impressive work it certainly is. Nor do I claim that there are no valid use cases for which an optimizing compiler might be seen as useful. But when I specify blade servers, for what I need them for, I always err on the side of “lots of RAM” and less on CPU-speed. I am delighted to see near-100% CPU utilization, but it will be widespread across many processes with my workloads, not a few, and a faster storage-channel is going to be the best buy for me. If CPU utilization falls off, I buy more RAM or look for an I/O bottleneck. If a process has a really big working-set size relative to the other processes in the mix, I look for algorithm changes to reduce it.

      In what I consider to be the general language case, if the speed of the Perl-based application is judged to be inferior, I would assert that an algorithm change is more likely to be successful. Perhaps a very tightly targeted one. Changes to the machine-code generation behavior in the general case might be of no value at all, because the system is literally “hurry up and wait,” and no optimizer can do anything about that.

Re^2: Perl 5 Optimizing Compiler
by chromatic (Archbishop) on Aug 13, 2012 at 16:08 UTC
    The essential code-paths within the Perl compiler/interpreter itself are already well-identified and are optimized.

    That may be true (it isn't, but it's at least plausible) if you take into account the current design of the Perl 5 VM. Unfortunately, the current design of the Perl 5 VM makes some assumptions which preclude performance.

    It's enlightening to run some of the silly microbenchmarks people use for competitive benchmarking to see exactly why Perl 5 scales so badly on some of them. (See also my response on compact, typed arrays.)
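    One reproducible example of that scaling problem is plain numeric summation, where Perl 5's per-op dispatch and SV boxing dominate the loop; the core Benchmark module makes the comparison easy to run:

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);
use List::Util qw(sum);

# Each iteration of the pure-Perl loop walks the op tree and boxes the
# running total in an SV; List::Util::sum runs the same loop inside C.
cmpthese(-1, {
    perl_loop => sub {
        my $total = 0;
        $total += $_ for 1 .. 100_000;
        $total;
    },
    c_loop => sub {
        sum(1 .. 100_000);
    },
});
```

The relative rates vary by machine and perl build, but the gap between the two entries is a direct measure of the per-op overhead under discussion.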

    Certainly, there are bona-fide edge cases, throughout the Perl system and its libraries, that are properly handled right now with XS code, in C or C++.

    Cases also exist where writing XS will make your code slower—and I don't mean writing inefficient XS.
