Perl 5 Optimizing Compiler, Part 5: A Vague Outline Emerges

by Will_the_Chill (Pilgrim)
on Aug 30, 2012 at 07:24 UTC ( [id://990666] )

Greetings Monks,

After ongoing and extensive discussion with Nick Clark, y'all, and many others, a vague project outline is emerging. Its edges are a bit hazy and not clearly defined against the background of existing Perl development, but the lack of clarity will be resolved as we move forward with more focused research and planning.

As pointed out independently by Nick Clark and BrowserUK, I think it is helpful and logical to see our job as 3 phases: parse code written in Perl 5 to produce optrees or equivalent, convert optrees to some new intermediate form, and efficiently execute the new intermediate form.

I am currently seeing 2 simultaneous development paths: one project focused on rewriting existing Perl internals to produce backend code for LLVM or the equivalent (basically PONIE redux); and another project focused on working outside the existing Perl internals. I will refer to these as the "guts" and "non-guts" approaches, respectively.

I believe these 2 parallel projects are primarily defined by their adherence (or not) to the "Perl defines Perl" maxim for phase 1 of the 3-phase approach: the Perl 5 parser. If we use the existing Perl 5 parser, then we are keeping to "Perl defines Perl" and thus our development is focused on rewriting Perl to run on a new internal engine. If we are using a new Perl 5 (subset) parser, like Flavio's Perlito or Ingy's C'Dent/Perl5i, then we are basically free of the existing Perl 5 internals and can translate the parsed application code into another high-level language or LLVM or whatever.
"The #1 rule of gov't spending: why build one when you can build two at twice the price?" (Hadden, Contact)
I'm not necessarily trying to build 2 separate compilers; I'm just hedging my bets, because we aren't yet sure if either approach will actually work.

The Guts Project

Considering the extensive LLVM-related discussion of my previous thread, it seems fairly obvious that LLVM should probably be the first new backend we target for the guts approach. In fact, a few people have already tentatively committed to working on a Perl5-on-LLVM project, so although this is not set in stone, it is certainly set in mud at least. If LLVM doesn't work out, other potentially viable backends may be the Java VM, Mono/CLR *shudder*, or maybe even ye olde Parrot. But I think LLVM is definitely the best place to start.

The Non-Guts Project

The two best options I see for the non-guts approach are Ingy's C'Dent/Perl5i and Flavio Glock's Perlito. Since Flavio seems to be busy at Booking.com and Ingy is actively working with me, it will likely be Ingy FTW. I just booked a flight for Ingy to come to Austin on September 18-19, and I will probably set up a video chatroom for anyone who wants to join in our discussions. Likely backend code generation targets for the non-guts approach include LLVM, Javascript, RPython, or even good ol' XS. This has yet to be determined.

Synthesis

I find it likely one or both of the initial approaches may not achieve 100% of the stated goal of making Perl perform within an order of magnitude of optimized C. However, I also find it quite likely that one of them may well succeed, and at the very least show us what we are doing wrong and point us in the right direction for further development. If LLVM doesn't work out for the guts approach, we try a different backend; the same principle applies for the non-guts approach. In the end, we may find our two approaches merged back together again somehow, and hopefully integrated into the official Perl distribution for everyone's benefit.

Help Wanted

Besides working out detailed development plans, our biggest issue at this point is personnel: who is qualified and available to work on this? If you may fit the bill, I've got some basic questions for you...

1. What is your personal interest in a Perl 5 optimizing compiler?
2. What part of the project do you feel qualified to work on?
3. What XS code have you written?
4. What is your level of familiarity with Perl internals?
5. Are you more interested in being sponsored to work hard on this project, or just volunteering some of your spare time?
6. Are you in a position to provide some initial pro bono coding effort?

Many thanks to everyone who has contributed to the discussion and efforts thus far. Exciting times!

Thanks,
~ Will

Replies are listed 'Best First'.
Re: Perl 5 Optimizing Compiler, Part 5: A Vague Outline Emerges
by dave_the_m (Monsignor) on Aug 30, 2012 at 13:53 UTC
    Ok, so I'm going to do the "pouring cold water on the project" thing. Sorry.

    My overall conclusion is that this project is completely unrealistic, in terms of the technical challenge, the expected speed improvements, the timescale, and the availability of suitably qualified labour.

    12 years ago, perl was at a crossroads. The experimental B:: suite had been written, and the experience gained from that showed that achieving significant performance gains was near impossible using the existing creaking perl core. After some mug throwing, the perl6 and parrot project was born. The Perl language would be redesigned to be more optimiser-friendly (e.g. optional types), while a new parser could target a proper bytecode, executed by a modern new VM. This new infrastructure would then give us the underlying system that facilitates doing all the sexy new compilerly/execution stuff like SSA, JIT, etc.

    There was lots of initial enthusiasm, lots of volunteer manpower, and timescales of a year or two bandied about (IIRC). Even at the time the estimates seemed unrealistic to me, but who was I to say? Now, 12 years later (admittedly after personality clashes and fallings out etc), we still don't have a complete, production-ready system. This isn't meant as criticism of the parrot and perl6 teams, but rather to point out just how hard it is to re-implement something as complex and subtle as the perl syntax and semantics. Remember, just the lexer in perl5 is 12,000 lines of C code.

    So I think we're about to repeat the same perl6 exercise (with admittedly more modest goals), with the same under-appreciation of its difficulty.

    So, that was a general overview. Now onto more specifics. First, LLVM. I like LLVM; I think it's a cool system. If I were going to write perl from scratch now, I would very seriously consider using its IR as the target of the compiler. The issue here, though, is how easy it would be to retroactively make perl use LLVM in some way or another. I think you get a sliding scale, with the achievable bits giving small gains, and large gains only (possibly) achievable through huge changes to the existing perl core: basically rewriting large parts of it (i.e. a project of parrot/perl6 scale).

    There seem to be three ways suggested of using LLVM. First, there's the trivial sense of using it as a better C compiler. Anecdotal evidence indicates that it might give you 5-10% speedup. But that's not really what we're discussing here.

    Secondly, there's Yuval's suggestion, which in its most basic (and thus 'doable') form is to change the API of the pp_* functions to allow the runops loop to be unrolled, and avoid the overhead of perl stack manipulation. It may also then allow LLVM to better optimise the resulting call tree, e.g. hefting stuff up to a higher level.

    Looking at the purely loop unrolling / stack side of things, I did a rough calculation to see what improvement could be expected. I ran the following perl code: $z = $x + $y * 3; ($x and $y contain ints) in a loop under cachegrind, and used the results to estimate approximately what proportion of the execution is spent on loop/stack overhead. The answer was approx 20%. Note that the example above was chosen specifically to use simple perl ops where the overhead is most likely to dominate. Perl has a lot of heavyweight ops like pp_match(), where the relative overhead is going to be much smaller. Also, the LLVM version will have an overhead of its own for setting up args and the current op, just hopefully less than the current overhead. So that means that overall, the loop/stack stuff is going to give us something less than 20% speedup (and probably a lot less in practice, unless all your perl code does is lots of lightweight stuff like additions). Set against this there might be improvements from LLVM being able to compile the resulting code better, but my speculation is that it won't be significant.
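    (A minimal sketch of the kind of harness this describes -- the loop count and exact invocation here are illustrative, not the precise ones used:)

        # bench.pl -- simple-op workload, so loop/stack overhead dominates
        my ($x, $y, $z) = (1, 2, 0);
        for (1 .. 1_000_000) {
            $z = $x + $y * 3;
        }

        # Then run under cachegrind and compare instruction counts:
        #   valgrind --tool=cachegrind perl bench.pl
        #   cg_annotate cachegrind.out.<pid>
        # Instructions attributed to the runops loop and to pp_* entry/exit
        # (stack handling), versus those in the op bodies, give the ~20% figure.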

    Yuval then listed some more speculative improvements that could then be built upon that basic stuff, but I still think the performance gains will be relatively modest. The advantage of the Yuval approach is that it should be achievable by relatively little effort and by relatively modest changes to the existing perl source.

    The third thing to do with LLVM, (which is what I think BrowserUK is advocating, but I may be wrong), is the wholesale replacement of perl's current runtime, making perl's parser convert the op tree to LLVM IR "bytecode", and thus making perl a first-class Perl-to-LLVM compiler, in the same way that clang is a first-class C-to-LLVM compiler. This would be a massive undertaking, involving understanding all the subtleties and intricacies of 25,000 lines of pp_* functions, and writing code that will emit equivalent IR - even assuming that you kept the rest of perl the same (i.e. still used SVs, HVs, the context stack, pads etc).

    If you were to go the whole hog and replace the latter with something "better", you're then into full perl6/parrot territory. You're at the point of rewriting most of perl from scratch: the point that I said earlier I might have started from if I had to do perl over again. But I think that unless you wholesale throw away SVs, pads, etc, you're not going to see game-shifting performance gains. So I think this option can be summed up as "yes, it would have been nice if perl had originally been written to target LLVM, but it wasn't, and the cost of retro-fitting is prohibitive".

    Now we get onto "writing a perl compiler that targets something like javascript, which will be fast, because lots of money is being spent making javascript fast".

    I see two main issues with this. Firstly, no one is ever going to write a perl parser that can fully parse perl and handle all its subtleties. The best you can do is to parse the easy 90% and ignore the edge cases. But "Ah", I hear you think, "90% is good enough for most people. My code doesn't use ties or overloading". The problem is that within the 10% not covered, there will be 1% that your code does in fact use. "Your code uses /(?{})/? That's a shame.". If you're lucky, you'll get a compile-time error telling you that feature X is unsupported. If you're unlucky (and you will be unlucky) your code will silently do the wrong thing due to some subtlety of auto-vivification, localization or stash manipulation or whatever.
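    (A couple of concrete examples of the sort of constructs that live in that awkward 10% -- all standard Perl, and any of them quietly breaks a subset parser:)

        # a code block embedded in a regex, executed during matching:
        my $n = 0;
        "aaa" =~ /a(?{ $n++ })a/;   # how often $n bumps depends on backtracking

        # a tied scalar: every read of $t calls back into Perl code:
        package Noisy { sub TIESCALAR { bless {} } sub FETCH { 42 } sub STORE { } }
        tie my $t, 'Noisy';
        print $t, "\n";             # prints 42, via Noisy::FETCH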

    Secondly, any impedance mismatch between perl and javascript is going to give you agonisingly slow performance. For example, if the full semantics of perl hashes can be provided by javascript dictionaries or objects say, then you can directly translate $foo{bar} into foo.bar or whatever. If however, the javascript facilities aren't suitable, then you might end up having to implement a Perl-style hash using javascript code, which is going to be slow. Also what tends to happen in these sorts of conversions is that the early proof-of-concept work (which uses a javascript dict say) works well and is really, really fast. Then you reach the point where you've done 50% of the work and it's going really well. Then you get round to implementing the 'each' op, and suddenly realise that it can't be done using a javascript dict. So you switch to the slow route. NB: the hash is just a hypothetical example, which may or may not be a problem in javascript. The real point is that there are lots and lots of things in perl which have odd semantics, that can be superficially implemented in the "fast" way, but which may have to switch to a slow approach once a wider subset of their semantics is implemented.
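    (To make the hypothetical concrete: 'each' is a stateful iterator hung off the hash itself, which a plain javascript object has no slot for:)

        my %h = (a => 1, b => 2, c => 3);
        while (my ($k, $v) = each %h) {   # iterator state lives in %h itself
            last if $k eq 'a';            # leave the loop early...
        }
        my ($k, $v) = each %h;            # ...and the next call resumes mid-hash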

    That's the end of my rants about specific suggestions. I'll just add a few final general comments.

    No matter what fancy tools or rewrites you use, you're always going to have some theoretical constraints on performance due to the nature of the perl language. For example, method dispatch in perl is always going to be cumbersome and un-optimisable, due to the way dispatch is based on point-in-time lookup in a stash. Yes, you can do clever tricks like caching, but perl already does this. There may be further tricks no-one's got round to thinking of yet (or at least not implementing yet), but such tricks are likely to be as applicable to the current perl code as to any LLVM or javascript variant. This really boils down to the fact that to see really significant performance changes, you have to change the perl language itself: add a new type system, object system, or whatever.
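    (To illustrate why dispatch can't be statically bound: the method a call site resolves to can legitimately change between two adjacent calls.)

        package Counter;
        sub new  { bless { n => 0 }, shift }
        sub step { $_[0]{n}++ }

        package main;
        my $c = Counter->new;
        $c->step;                                  # resolves to Counter::step
        no warnings 'redefine';
        *Counter::step = sub { $_[0]{n} += 10 };   # edit the stash at runtime
        $c->step;                                  # same call site, new method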

    The general experience so far of doing "clever stuff" in other languages like Python, often with considerable funding and resources, has been a catalogue of failures, abandoned projects, and disappointing speedups. It doesn't necessarily follow that no-one should ever attempt such a thing again, but it does indicate that doing this sort of thing (and getting it right) is really hard. And the general impression I get from reading up about what "clever stuff" people have already been trying with perl (e.g. Yuval and LLVM), is that major performance improvements weren't the main consideration for the project.

    Note that this is the last contribution I intend to make on this topic for the moment; I've already spent waaaay too much time reading up and discussing it.

    PS: For anyone familiar with the "Big Talk" sketch in the UK comedy series "That Mitchell and Webb Look", I feel that this whole discussion has been rather similar to its "Come on, boffins!" ethos.

      The third thing to do with LLVM, (which is what I think BrowserUK is advocating, but I may be wrong), is the wholesale replacement of perl's current runtime, making perl's parser convert the op tree to LLVM IR "bytecode", and thus making perl a first-class Perl-to-LLVM compiler, in the same way that clang is a first-class C-to-LLVM compiler. This would be a massive undertaking, involving understanding all the subtleties and intricacies of 25,000 lines of pp_* functions, and writing code that will emit equivalent IR - even assuming that you kept the rest of perl the same (i.e. still used SVs, HVs, the context stack, pads etc).

      You're about 1/3rd of the way to what I was trying to suggest as a possibility. I'm going to try again. I hope you have the patience to read it. I'm going to start with an unrealistic scenario for simplicity and try to fill in the gaps later.

      Starting with a syntactically correct perl source that is entirely self-contained -- uses no modules or pragmas; no evals; no runtime code generation of any kind -- there are (notionally, no matter how hard they are to describe linguistically or to separate practically) three parts involved in the running of that program:

      1. The parsing of the source code and construction of the perl internal form -- call it a tree or graph; bytecode or opcodes -- for want of a term and some short-hand, the Perl Code Graph (PCG).

        Part 1 cannot be changed. It *is* Perl. So segregate it (I know; I know) out into a separately compiled and linked, native code unit.

        A dll (loaded by the minimal perl.exe much as perl5.x.dll is today), that reads the source file and builds exactly whatever it builds now, and then gets the hell out of dodge, leaving the PCG behind in memory.

      2. The interpreter proper -- the runloop -- that processes the PCG and dispatches to the Perl runtime (PRT).

        Moved below, because I need you to understand the context above and below, before the description of this middle bit will make sense.

      3. The Perl runtime -- the functions that do the actual work.

        Part 3 is very hard to re-code, as much of the behavioral semantics of perl is encapsulated entirely within it.

        So, give the whole kit&caboodle -- all the pp_* source code and dependencies -- to LLVM using its C front end, to process into LLVM intermediate form (IF), and then pass that through the various IF optimising stages until it can do no more, and then write it in its optimised IF form to a file (PRT.bc).

        This process is done once (for each release) by "the devs". The optimised PRT.bc file is platform independent and can be distributed as part of the build -- at the risk of raising hackles, including mine -- a bit like MSVCRT.dll, but portable.

        This single binary file contains all the 'dispatched to' functions and their dependencies, pre-optimised as far as that can go, but still in portable IF form.
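        (A hedged sketch of that one-time build step, driven from Perl. The pp_* file names are the real ones from the perl source tree; the flags are illustrative only -- a real build needs perl's full CFLAGS and include paths:)

            # build_prt.pl -- compile the runtime sources to LLVM bitcode,
            # then link them into the single portable PRT.bc
            my @pp = qw(pp.c pp_hot.c pp_ctl.c pp_sys.c pp_pack.c pp_sort.c);
            for my $src (@pp) {
                (my $bc = $src) =~ s/\.c$/.bc/;
                system('clang', '-emit-llvm', '-c', $src, '-o', $bc) == 0
                    or die "clang failed on $src";
            }
            my @bc = map { (my $b = $_) =~ s/\.c$/.bc/; $b } @pp;
            system("llvm-link @bc -o PRT.bc") == 0 or die "llvm-link failed";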

      Part 2. The only new code that needs to be written. But even this already exists in the form of -MO=Deparse.

      New code is adapted from Deparse. It processes the PCG in the normal way, but instead of (re)generating the Perl source code, it generates an LLVM IF "copy" of the PCG it is given. Let's call that the LLCG.
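      (The skeleton of such a walker falls out of the same B module that Deparse is built on. A minimal sketch -- emit_ir() is hypothetical, and is of course where all the hard work would live:)

          # walk the main program's ops in execution order: the traversal
          # a PCG-to-LLCG emitter would start from
          use B qw(main_start class);
          my $op = main_start();
          while ($$op) {                # a null op address ends the chain
              # emit_ir($op);           # dispatch on $op->name, emit LLVM IF
              printf "%-10s %s\n", class($op), $op->name;
              $op = $op->next;
          }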

      The LLCG is now the program we started with, but in a platform independent, optimisable (also platform independent) form that can be

      • Saved to disk in any of LLVM's IF forms -- text or binary -- and passed to other platforms, or reloaded from disk at a later stage.
      • Or it can be passed directly to the LLVMI (IF interpreter) along with PRT.bc, to be interpreted immediately.
      • Or it can be passed, along with PRT.bc, to the LLVM JIT, and it can generate the native platform code, with optimisations, that is then executed.

      I hope that is clearer than my previous attempts at description.

      • All of the above is possible.
      • None of it requires starting from scratch.
      • None of it means changing Perl's syntax or discarding any of perl's features.
      • It doesn't even require discarding the existing Perl runloop or runtime.

        Distribute the PRT in fully linked binary form (ie. perl5.x.dll/.so, albeit with some of its current contents split out), and you effectively have bog standard perl.

        It would require a command line switch to invoke the LLVM stuff.

        Very little new code is needed. Essentially, just the generation of the LLCG from the PCG, and half of that already exists.

      • It is a very low-risk, low-initial effort strategy.

      I'm fully aware that perl frequently reinvokes the parser and runloop in the process of compiling the source of a program, in order to deal with used modules and pragmas and BEGIN/CHECK/UNITCHECK/INIT/END blocks. Effectively, each such iteration or recursion would be processed the same way as the above standalone program. If the module has previously been saved in .bc form, the parsing and PCG->LLCG conversion can be skipped.
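      (The smallest demonstration of why those blocks matter: code runs while the program is still being parsed, and can define the code that follows it.)

          BEGIN {
              print "runs before the rest of the file is even parsed\n";
              eval 'sub made_during_compile { "hello" }';  # a sub minted mid-parse
          }
          print made_during_compile(), "\n";  # only resolvable because BEGIN ran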

      The first step, and perhaps the hardest part of getting started, would be re-engineering the existing build process -- and a little tweaking of the source files -- to break apart the code needed for parts 1 & 2 from part 3, so they can be built into separate dlls -- and the latter into PRT.bc. This process may result in some duplication, as perl tends to use internally some of the same stuff that it provides to Perl programs as their runtime.

      These modifications to the build process, and the splitting out of the parser/PCG generation from the runtime, could be done and used by the next release (or the one after that) of the existing Perl distribution, without compromising it.

      It would not be trivial and it would require someone with excellent knowledge of both the internals and the build process -- ie. YOU! -- but it wouldn't be a huge job, and it needn't be a throwaway if all the rest failed or went nowhere. It might even benefit the existing code base and build system in its own right.

      I'm done. If that fails to clarify or persuade, so be it. I'll respond to direct questions should there be any, but no more attempts to change anyone's mind :)

      In the unlikely event you read to here, thank you for your time and courtesy.


      With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
      Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
      "Science is about questioning the status quo. Questioning authority".
      In the absence of evidence, opinion is indistinguishable from prejudice.

      RIP Neil Armstrong

        I understand that strategy, and it's probably easier to start to see results than most other approaches, but keep in mind a few drawbacks:

        • You have to compile every part of the system to LLVM bitcode, including the Perl core, any modules you use, and every XS component. This is cacheable, but you still have to do it.
        • Before running a program, you have to link all of the bitcode together into a single image. This image will be large.
        • Before running a program, you really want to run as many of LLVM's optimization stages as possible. This takes time and memory.
        • You may be able to emit native code which represents only that program and execute that. Essentially you're replicating what PAR does, with all of the caveats about program analysis and requirement discoverability.

        I expect the resulting binaries to be huge. I expect the link and optimization steps to take a long time. I expect you'll have to keep the Perl 5 interpreter around anyway, because there are things you just can't do without rewriting the language (to get around the BEGIN/import symbol declaration dance, for example). I don't know if you have to include any LLVM runtime components.

        I can imagine that you can optimize some programs with this approach, but I don't know that the intrinsic overhead of the Perl 5 VM is more than 10%. Maybe link-time optimization can cut out another 10%. Part of that is the inherent flexibility of Perl 5's design, and part of that is that LLVM is at heart a really good compiler for languages that act like C++.

        (Damn, I said I wouldn't be responding further...)

        New code is adapted from Deparse. It processes the PCG in the normal way, but instead of (re)generating the Perl source code, it generates an LLVM IF "copy" of the PCG it is given. Let's call that the LLCG.
        This is the bit I don't currently get. Take a piece of code like:
        $m = ($s =~ /foo/);

        Which is compiled to an optree that looks like:

        7  <@> leave[1 ref] vKP/REFC ->(end)
        1     <0> enter ->2
        2     <;> nextstate(main 1 p:3) v:{ ->3
        6     <2> sassign vKS/2 ->7
        4        </> match(/"foo"/) sKPS/RTIME ->5
        -           <1> ex-rv2sv sK/1 ->4
        3              <#> gvsv[*s] s ->4
        -        <1> ex-rv2sv sKRM*/1 ->6
        5           <#> gvsv[*m] s ->6
        In general terms, what would the IR look like that you would convert that into?

        Dave.

      Secondly, any impedance mismatch between perl and javascript is going to give you agonisingly slow performance. For example, if the full semantics of perl hashes can be provided by javascript dictionaries or objects say, then you can directly translate $foo{bar} into foo.bar or whatever. If however, the javascript facilities aren't suitable, then you might end up having to implement a Perl-style hash using javascript code, which is going to be slow. Also what tends to happen in these sorts of conversions is that the early proof-of-concept work (which uses a javascript dict say) works well and is really, really fast. Then you reach the point where you've done 50% of the work and it's going really well. Then you get round to implementing the 'each' op, and suddenly realise that it can't be done using a javascript dict. So you switch to the slow route.

      This rings so true.

      I've seen the same thing several times in Perl 6 compilers that targeted other high-level languages (for example kp6 comes to mind, which had a Perl 5 backend). It became so slow that hacking on it wasn't fun anymore, so people stopped hacking on it. (I think perlito descended from kp6 though).

      Rakudo had the same problem: parrot's object system didn't fit it. So it had a huge rewrite, switching to a custom object system, which only worked because much of it could be written in C. If it had had to be done on top of parrot primitives, it could never have worked with decent speed.

      And the problem is, there are so many fundamental operations that have subtle differences between Perl 5 (and Perl 6) and possible target languages like Javascript: routine invocation (think of @_ elements being aliases), method dispatch, hash and array access (think of autovivification, or what hashes return in scalar context in Perl 5). You might be able to take the speed hit from adapting one of them to the right semantics, but all of them put together will kill you.
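      (Two of those, to make the point concrete:)

          # @_ elements alias the caller's variables:
          sub bump { $_[0]++ }    # no reference taken, yet...
          my $n = 41;
          bump($n);               # ...$n is now 42

          # autovivification: merely *reading* a nested path creates structure:
          my %h;
          my $v = $h{a}{b};       # $h{a} now exists as an empty hashref
          print exists $h{a} ? "autovivified\n" : "untouched\n";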

      I am now convinced that targeting a high-level language like javascript isn't going to result in a speedup. I hope somebody proves me wrong eventually.

        I am now convinced that targeting a high-level language like javascript isn't going to result in a speedup.

        FWIW: I reached a similar conclusion.

        Which is why I think the only game in town is (a) Low Level Virtual Machine.

        I'm not (yet) convinced that it can work its magic on Perl5; but I think it would be (have been?) the best possible underpinning for Perl6.



        moritz: Flavio Glock's Perlito was already able to parse and compile a fast subset of Perl and execute it on nodejs (javascript) 3x faster than perl itself, and also with SBCL (a fast Common Lisp).
      Looking at the purely loop unrolling / stack side of things, I did a rough calculation to see what improvement could be expected. I ran the following perl code: $z = $x + $y * 3; ($x and $y contain ints) in a loop under cachegrind, and used the results to estimate approximately what proportion of the execution is spent on loop/stack overhead. The answer was approx 20%. Note that the example above was chosen specifically to use simple perl ops where the overhead is most likely to dominate. Perl has a lot of heavyweight ops like pp_match(), where the relative overhead is going to be much smaller. Also, the LLVM version will have an overhead of its own for setting up args and the current op, just hopefully less than the current overhead. So that means that overall, the loop/stack stuff is going to give us something less than 20% speedup (and probably a lot less in practice, unless all your perl code does is lots of lightweight stuff like additions). Set against this there might be improvements from LLVM being able to compile the resulting code better, but my speculation is that it won't be significant.
      You are comparing legacy Perl 5's performance with legacy Perl 5. I don't think Will wants to just recompile Perl 5 the C program with Clang, or write yet another B::C.

      The third thing to do with LLVM, (which is what I think BrowserUK is advocating, but I may be wrong), is the wholesale replacement of perl's current runtime, making perl's parser convert the op tree to LLVM IR "bytecode", and thus making perl a first-class Perl-to-LLVM compiler, in the same way that clang is a first-class C-to-LLVM compiler. This would be a massive undertaking, involving understanding all the subtleties and intricacies of 25,000 lines of pp_* functions, and writing code that will emit equivalent IR - even assuming that you kept the rest of perl the same (i.e. still used SVs, HVs, the context stack, pads etc).

      Why keep the pp_* and SVs/contexts/pads if they aren't needed? Some CVs can be determined to be optimizable by static code analysis (SCA) in "LLVM Perl". If a CV can't be optimized (a string eval exists, say), the CV stays as a slow "legacy" CV. Easy things get faster, hard things stay slow. The point is to optimize "typical" Perl code, not optimize the performance of JAPHs.
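      (Roughly that distinction in code -- the first sub is a closed world a static analysis could compile; the second can rewrite anything at runtime, so it has to stay on the legacy path:)

          sub hot_path {          # no eval, no magic: a candidate for SCA
              my ($x, $y) = @_;
              return $x + $y * 3;
          }

          sub escape_hatch {      # string eval defeats any static analysis:
              my ($code) = @_;
              return eval $code;  # it may redefine subs or poke any stash
          }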
        You are comparing legacy Perl 5's performance with legacy Perl 5. I don't think Will wants to just recompile Perl 5 the C program with Clang, or write yet another B::C.
        No, I am specifically trying to evaluate the effect of the "basic" Yuval approach of JIT converting a CV's ops tree into a list of calls to modified pp_* functions, that then get compiled to IR. That approach seemed to be what Will was advocating. Certainly it was the starting point for the LLVM discussion.

        Dave

      Ok, so I'm going to do the "pouring cold water on the project" thing. Sorry.

      Please put on pants!

Re: Perl 5 Optimizing Compiler, Part 5: A Vague Outline Emerges
by flexvault (Monsignor) on Aug 31, 2012 at 11:57 UTC

    Dear Monks,

    Personally, I think Will_the_Chill has done an amazing job in 17 days of getting/digesting/presenting a lot of interesting information about a "Perl 5 Optimizing Compiler". Whether the outcome is 5% or 300% or whatever, only time will tell.

    Will_the_Chill introduced himself as a businessman, CTO of an Austin company, and has the business need to see this succeed. I personally think that the success of this project could move Perl back into the mainline of IT.

    Before even seeing a single line of Perl code, I sat in meetings where different languages were discussed, and it was the bias against Perl that made me wait to start using it. On the negative side was always the lack of a 'compiler'. I have found lots of ancient posts on PM and elsewhere about the need for a compiler, and all the 'technical' reasons why it won't work!

    But maybe this time it will work for business reasons.

    I find the concept of the LLVM backend exciting, and reading the pdf that BrowserUk referenced, it seems like a viable possibility. If I read it correctly, what I liked was that LLVM can operate in different modes, and when optimization isn't possible, it falls back to interpreter mode. That capability may help in the area of 'magic' vs 'not magic' that has repeatedly been brought up in this discussion.

    Just my 2 cents!

    "Well done is better than well said." - Benjamin Franklin

Re: Perl 5 Optimizing Compiler, Part 5: A Muddy Outline Emerges
by Anonymous Monk on Aug 30, 2012 at 07:40 UTC

    A Vague outline? I think it's Muddy :D

      Well, I did admit to parts being "set in mud at least". :)
Re: Perl 5 Optimizing Compiler, Part 5: A Vague Outline Emerges
by fglock (Vicar) on Sep 03, 2012 at 17:21 UTC

    1. What is your personal interest in a Perl 5 optimizing compiler?

    This is something I could probably use at work, where I mostly optimize perl code.

    I'm also the author of the perlito compiler. I'm investigating the efficient implementation/emulation of perl semantics on alternative backends. I'm also interested in the parser itself.

    2. What part of the project do you feel qualified to work on?

    Maybe some parsing, code generation, and optimizations.

    3. What XS code have you written?

    Almost none

    4. What is your level of familiarity with Perl internals?

    Almost none

    5. Are you more interested in being sponsored to work hard on this project, or just volunteering some of your spare time?

    I don't know yet, but the project looks really interesting.

    6. Are you in a position to provide some initial pro bono coding effort?

    Sure.

      Could Perlito function something like PyPy does for Python? If Perlito can make an AST, would it be possible to use RPython as a target for that AST?

        Yes, Perlito creates an AST. I've been considering RPython as a possible backend.

        $ node perlito5.js -Cast-perl5 -e ' print "hello, World!" '
        [
            bless({
                'body' => [
                    bless({
                        'arguments' => [
                            bless({
                                'buf' => 'hello, World!',
                            }, 'Perlito5::AST::Val::Buf'),
                        ],
                        'code' => 'print',
                        'namespace' => '',
                    }, 'Perlito5::AST::Apply'),
                ],
                'name' => 'main',
            }, 'Perlito5::AST::CompUnit'),
        ]
Re: Perl 5 Optimizing Compiler, Part 5: A Vague Outline Emerges
by linuxkid (Sexton) on Aug 31, 2012 at 03:45 UTC

    I see vestiges of Bloom's!

    --linuxkid


    imrunningoutofideas.co.cc
      Bloom's? What is Bloom's?

        Taxonomy, wiki article

