in reply to Re^2: Rakudo Perl 6 and MoarVM Performance Advances
in thread Rakudo Perl 6 and MoarVM Performance Advances

Results for a similar 19MB log file on my 2013 Macbook Pro:

Perl 5: 0.68 secs
Perl 6/Grammar: 49.8 secs

so about 73 times slower.
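For context (the original benchmark code isn't shown in this thread), a minimal sketch of what a Perl 6 grammar for a simple log line might look like, next to the rough Perl 5 regex equivalent, follows. The field names and line format here are hypothetical, not the poster's actual code:

```perl6
# Hypothetical sketch: the actual benchmark code is not shown in the thread.
grammar LogLine {
    token TOP     { <date> \s+ <level> \s+ <message> }
    token date    { \d ** 4 '-' \d ** 2 '-' \d ** 2 }
    token level   { \w+ }
    token message { \N* }
}

# A roughly equivalent Perl 5 regex, of the kind the 0.68 s figure
# presumably comes from:
#   /^(\d{4}-\d{2}-\d{2})\s+(\w+)\s+(.*)$/

my $m = LogLine.parse('2014-09-15 INFO server started');
say ~$m<level>;   # INFO
```

Each `token` compiles to a method on the grammar, and `parse` anchors the `TOP` rule to the whole string; the per-method dispatch and Match-object construction are part of what the compiler work discussed below aims to speed up.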


Replies are listed 'Best First'.
Re^4: Rakudo Perl 6 and MoarVM Performance Advances
by raiph (Deacon) on Sep 15, 2014 at 15:55 UTC
    Aiui your 73X slower result came purely from changing your code, not an improved compiler. Right?

    A variety of related changes to the compiler (and its toolchain) have since landed in the Rakudo, NQP and MoarVM HEADs. Aiui these will deliver significantly better results for both your original and grammar versions.

    The really big upcoming performance breakthrough for P6 will be the "Great List Refactor". The GLR, which has been discussed for years, is expected to have a very big impact on the execution speed of lists, arrays, etc. in typical code, and to substantially reduce RAM usage in many scenarios as well.

    PerlJam recently said he's "working on a TPF grant to pay for having jnthn, TimToady, and pmichaud kidnapped and locked in a room" to do the GLR. Joke or not, it reflects the practical bus number on this (three, imo) and the ideal scenario (all three focusing on the GLR for a few weeks).

    Larry Wall, who has been getting his hands increasingly dirty for a year or so now (hacking on Rakudo and its toolchain), has been preparing for the GLR by reading guts code, profiling, and landing various related changes over the last few months. Larry's recent discussion of some things the GLR will take into account may be of interest.

    Fwiw, when PerlJam recently wondered aloud "if it's worth adding GLR to S99", Larry responded "I hope to make the term obsolete pretty soon". Here's hoping we'll see a big GLR performance jump this year.

    One can reasonably expect significant further performance improvements every month for years to come. This year has seen Rakudo, NQP and MoarVM gain classic optimization phases, frameworks and tools. Some work taking advantage of these has already landed, yielding big improvements in many months this year, but a lot of improvement is still ahead of us. For example, the MoarVM JIT is in good shape in the sense that it works and already speeds up some code slightly, but most of its benefits are yet to come.