Re: Perl 5 Optimizing Compiler, Part 4: LLVM Backend?
by Will_the_Chill (Pilgrim) on Aug 27, 2012 at 21:16 UTC
Here, see what I've been talking about with Yuval and Reini, the authors of the original Perl5-to-LLVM link I posted...
Yuval & Reini,
Thank you both very much for your enlightened and informative responses copied below.
I am quite excited about the possibility of these ideas you have proposed, in regards to Perl 5 and LLVM.
My important goals are:
1. Significantly Faster Perl 5 Runtime
2. 100% Backward Compatibility w/ All Existing Perl 5 & XS Code
3. Psychological & Political Feasibility For Adoption Into Perl 5 Core
Am I correct in my understanding that your LLVM ideas would, at least, stand a fighting chance of achieving these goals?
Thanks, ~ Will
On Monday, August 27, 2012 at 5:29 AM, Reini Urban <email@example.com> wrote:
It's not an LLVM problem. Perl is not JIT-friendly, even less so than Python. Yuval's and my idea was to split up 20% of the important generic ops into typed and directly callable ops. With LLVM it would then be easy to optimize them. But there are also other possibilities.
On Aug 27, 2012 11:24 AM, "Will Braswell" <firstname.lastname@example.org> wrote:
Tell Yuval I said "howdy from Texas"! I just sent him an e-mail at his email@example.com account.
Here is a link I just got from Nick Clark, saying that LLVM was not good as a backend for dynamic languages like Perl...
Maybe I should ask the LLVM people directly, what is needed to upgrade LLVM to work with dynamic languages?
Thanks, ~ Will
On Monday, August 27, 2012 at 6:37 AM, Yuval Kogman <firstname.lastname@example.org> wrote:
Well, in the time that has passed since I wrote that rant, not much has changed.
The most pressing issue is to determine really how much it's worth it, what can it give us.
In my opinion there are two things a high-quality optimizer can give us. The less important one is decent performance for code for which Perl is currently considered inappropriate. The second is psychological, and derives from the first: the barriers to writing clean code (the misconception that slow code is necessarily bad, or that certain constructs are too expensive to really use) can be brought down (optimize the microoptimizations out of the community ;-)).
In short, I believe the most effective optimizer for Perl would apply very effective microoptimizations across the board, favouring the things that are perceived to be a problem. I don't think these are actual problems, because you can always reach for embedded C, PDL, or another language to work around Perl's weaknesses; using the right tool for the job is part of the Perl philosophy, and Perl is simply not the right tool for, say, crunching lots of numbers.
I'd really like to hear your thoughts on this dilemma. I think when people say "optimizer" they often have very different ideas of what to expect, but don't really notice how different those ideas are...
Anyway, getting to the practical... To vet my idea we'd need some examples of both types of code we think would benefit: for the first, arithmetic benchmarks (though the scope should really be broader than just arithmetic); for the second, code that is currently considered a bit taboo in Perl but that we'd like to be fast enough (for example, instantiating a ton of small objects, or dividing code up into really tiny functions).
Then we can check how much of a positive impact LLVM can have by hand-refactoring only the necessary opcode PP functions for the benchmarks.
I think this won't take very long, and by the end of it we'll have a better idea of LLVM's potential to optimize Perl code purely by refactoring the Perl core, without any incompatible changes.
Generally speaking, for any JIT implementation, Perl's stack operations and the extra indirection necessitated by the current implementation of opcodes and the runloop are going to be a bottleneck, so this would be a worthy endeavor even if not targeting LLVM.
If we decide LLVM is worth it, I think the most effective approach would be to write code that refactors the PP operations for us as necessary. It shouldn't be too hard, and should probably handle most of the trivial cases (i.e., consistent use of macros at just the top scope, with no conditionals around stack manipulation), leaving humans to handle the complex cases.
Anything that happens later is a total crapshoot, so I think that would be a good start.