PerlMonks
Re^11: Modernizing the Postmodern Language?

by chromatic (Archbishop)
on Jul 06, 2020 at 23:24 UTC ( [id://11118986] )


in reply to Re^10: Modernizing the Postmodern Language?
in thread Modernizing the Postmodern Language?

Because Raku has a better internal representation for integers than Perl's SvIV and can manage ranges lazily without reifying a large data structure. (I can't remember right now if Perl optimizes this in recent releases.)

I don't know what doing nothing a billion times in 12 or so seconds has to do with my point that the semantic mismatch between a language and a target platform is difficult to manage, however.

You can port Raku to LLVM or Node or Inferno or whatever platform you want, but unless that platform can optimize grammars that require dynamic dispatch for every individual lexeme, you're going to end up with a slow Raku.

Replies are listed 'Best First'.
Re^12: Modernizing the Postmodern Language?
by syphilis (Archbishop) on Jul 07, 2020 at 03:01 UTC
    I can't remember right now if Perl optimizes this in recent releases.

    I don't think so - and same goes for raku, apparently.
    On Ubuntu-20.04 (perl-5.32.0):

        $ perl -le '$x = time;for (1..1000000000) {}; print time - $x;'
        51

    On Windows 7 (perl-5.32.0):

        C:\>perl -le "$x = time;for (1..1000000000) {}; print time - $x;"
        13
    The Windows box is about twice as fast as the Ubuntu box, so I'm not sure why the difference in this case is a factor of 4.

    Anyway, thankfully perl has XS/Inline::C at hand to enable sane and efficient handling for cases such as these.

    Cheers,
    Rob
Re^12: Modernizing the Postmodern Language?
by bliako (Monsignor) on Jul 07, 2020 at 07:28 UTC

    A question in my mind is: can Perl's internals be rewritten for more efficiency, given all the experience gained over the years in these parallel attempts? Equally important: an API to access the internals a la XS, but obviously easier and more user-friendly, perhaps "isolating" the core better?

    One point of view is by salva here Re^4: Modernizing the Postmodern Language?. Is yours different? Is there hope?

      Can they be rewritten? Yes. It's just code.

      Replacing XS is a big deal though. That's a bigger break than syntax, because 30% of the CPAN won't work with new releases.

      Nicholas Clark's Ponie work shows one way where it's difficult. Artur Bergman ran an experiment around the same time to migrate SVs to something more like Parrot's PMCs, where every data type had virtual methods in well-defined slots instead of accessor macros. That didn't go very far either. These are fundamental assumptions of perl's implementation without any encapsulation beyond C macros.

      It'll be a lot of work.

      The best way I've ever figured out to do this is to introduce an abstraction layer for XS that's not XS and that lets the core gradually migrate away from the XS-ish implementation, but even that's a decades-long project I fear.

        > PMCs, where every data type had virtual methods in well-defined slots instead of accessor macros

        Well, my first idea was: why not write a wrapper macro/class which applies the accessor macros in the form of "virtual methods in well-defined slots"?

        > The best way I've ever figured out to do this is to introduce an abstraction layer for XS that's not XS

        Are you going in the same direction here?

        Why would we need to gradually migrate away from XS then?

        Cheers Rolf
        (addicted to the Perl Programming Language :)
        Wikisyntax for the Monastery

        UPDATE s/CS/XS/ typo

Re^12: Modernizing the Postmodern Language?
by b2gills (Novice) on Jul 14, 2020 at 22:06 UTC

    So in other words, Raku is a better designed language.

    Actually no. That is not the correct view.

    Raku is actually a designed language. Perl is an accumulation of parts that mostly work together.

    The main reason grammars are slow is because basically no one has touched the slow parts of it for the better part of a decade. We have some knowledge about how to speed it up because earlier prototypes had those optimizations.

    The thing is, it isn't that slow. Or rather, it isn't that slow considering that you get an actual parse tree.

    If you must know, the main reason it is slow is probably because it sometimes looks at particular tokens perhaps a half-dozen times instead of once. (This is the known optimization that was in one of the prototypes that I talked about.)

    It has absolutely nothing to do with being able to replace what whitespace matches. That is already fairly optimized because it is a method call, and we have optimizations which can eliminate method call overhead. Since regexes are treated as code, all of the code optimizations can apply to them as well. Including the JIT.


    Really if Perl doesn't do something drastic, in five to ten years I would suspect that Raku would just plain be faster in every aspect. (If not sooner.) The Raku object system already is faster for example. (And that is even with MoarVM having to be taught how Raku objects work every time it is started.)

    Something drastic like splitting up the abstract syntax tree from the opcode list. That way it can get the same sort of optimizations that make Raku faster than Perl in the places where it is faster.

    Imagine if the code I posted would turn into something more like this:

    loop ( my int64 $i = 1; $i <= 1_000_000_000; ++$i ) {}
    Or rather, transform that into assembly language, which is basically what happens for Raku. (Writing that directly only reduces the runtime by a little more than a second.)

    It seems like every year or two we get a new feature or a redesign on a deep feature that speeds some things up by a factor of two or greater. Since Perl is more stratified than designed, it is difficult to do anything of the sort for it.


    Also I don't know why we would want to downgrade to LLVM. (Perhaps it can be made to only be a side-grade.)

    As far as I know LLVM only does compile-time optimizations. The thing is that runtime optimizations can be much much better, because they have actual example data to examine.


    Perl is an awesome language.
    Raku is an awesome language in the exact same ways, but also in a lot more ways as well.
    Many of those ways make it easier to produce faster code.

      Also I don't know why we would want to downgrade to LLVM.

      That wasn't the point of my post, but it was also exactly the point of my post, so I'm not sure why we're having a discussion on how Raku will someday eventually be faster than Perl, because that's irrelevant to my point that the semantic mismatch between a language and its implementation is really, really important to performance.

      The main reason grammars are slow is because basically no one has touched the slow parts of it for the better part of a decade.

      I remember profiling and optimizing grammars in an earlier version a little over a decade ago, so.

      It has absolutely nothing to do with being able to replace what whitespace matches.

      I don't believe this, because:

      • Like I said, I spent a lot of time looking at this.
      • Doing nothing is faster than doing something. A JIT is not magic fairy dust that makes everything faster. Even if you can get this codepath down to where you can JIT across a monomorphic call site, the resulting code is still not faster than a single inlined lexeme, especially if you account for the time and memory overhead of JITting at all. The semantic mismatch between a language and its implementation is really really important to performance.
      Really if Perl doesn't do something drastic, in five to ten years I would suspect that Raku would just plain be faster in every aspect.

      I've heard this every year for the past 10 years, but I respect that you're not promising it in the next year, like Raiph always used to. I'll believe it when I see it.

        You do realize that there exists a project which acts like a JIT for compiled code, right?

        It exists because a JIT has more information available to it than the compiler does, so it can do a better job at optimization.

        The way Raku does it is even better than that because the JIT can actually sort-of ask the compiler what it really wants. Or rather the compiler gives the JIT enough hints ahead of time.


        The reason I actually gave a timescale, instead of just saying “future”, is the RakuAST project, which will end up cleaning up a lot of semantic mismatches in the process. It should also make a lot of optimizations easier to perform.

        The plan, I believe, is for Rakudo to switch to it within a year, which allows 4 to 9 years for optimizations. Again, those optimizations should be easier than the ones that already made Raku faster in some cases.
        (By faster I mean faster than Perl and C/C++ for some cases.)

        MoarVM is also getting a new dispatcher that should also be easier to add optimizations to. I don't recall seeing a timescale on that though.
        (At least some of those optimizations will probably happen before it gets completely switched to.)

        So two of the slowest parts are getting replaced with much more optimizable designs.


        An optimization is just a way to push the implementation of a language as far as possible from the semantics of that language without it being noticed.
        So you were sort-of right, the semantic mismatch between a language and its implementation is really really important to performance, only you had the argument backwards.

        Of course you want as few semantic mismatches as possible that don't buy you optimizations, because each mismatch is still code.

        Rakudo is made of layers where each layer only has a slight semantic mismatch from its next higher or lower neighbor.
        This allows for much larger shifts of semantics at the lowest layer without it being noticed at the top layer.

        With perl there is pretty much exactly one layer, and it is the top layer. Which means you can't really change it all that much without changing semantics and thus breaking existing code. So there is a vast sea of optimizations that are just not possible.

        Also, I would really like to know how allowing you to change what is considered whitespace counts as a semantic mismatch. Because it really isn't one.


        The semantic mismatch between what is in my head and Raku is less than the mismatch with Perl.

        That is the most important mismatch to reduce, because it is the only one that can't be optimized away.
