Clear questions and runnable code get the best and fastest answer.
You can save the OP from many blind alleys.
I don't think he's listening. If this stuff were easy, it would be more than a pipe dream by now.
I hope no one's taking my criticism as stop energy. My intent is to encourage people to solve the real problems and not pin their hopes on quick and dirty stopgaps that probably won't work.
Are you 100% certain that, under those circumstances, the link-time optimiser isn't going to find substantial gains from its supra-compile-unit view of that code?
I expect it will find some gains, but keep in mind two things. First, you have to keep around all of the IR for everything you want to optimize across. That includes things like the XS in DBI as well as the Perl 5 core. Second, LLVM tends to expect that the languages it compiles have static type systems. The last time I looked at its JIT, it didn't do any sort of tracing at runtime, so either you add that yourself, or you do without. (I stand by the assumption that the best opportunity for optimization from a JIT is rewriting and replacing basic blocks with straight-line code that takes advantage of known types.)
With that said, compiling all of Perl 5 with a compiler that knows how to do link time optimization does offer a benefit, even if you can use LTO only on the core itself. This won't be an order of magnitude improvement. If you get 5% performance improvement, be happy.
Defining a language that targets a VM defined in (back then) lightweight C pre-processor macros, and throwing it at the C compilers to optimise, was very innovative.
Maybe so as far as that goes, but the implementation of Perl 5 was, from the start, flawed. Even something as obvious as keeping the reference count in the SV itself has huge problems. See, for example, how quickly memory pages shared copy-on-write between processes go unshared, even when those processes merely read values.
The problem is that the many heavy-handed additions, extensions, and overzealous "correctness" drives have turned those once-lightweight opcode macros into huge, heavyweight, scope-layered, condition-ridden lumps of unoptimisable boilerplate.
I think we're talking about different things. Macros or functions are irrelevant to my criticisms of the Perl 5 core design. My biggest objection is the fat opcode design that puts the responsibility for accessing values from SVs in the opcode bodies rather than using some sort of polymorphism (and it doesn't have to be OO!) in the SVs themselves.
Are continuations a necessary part of a Perl-targeted VM?
They were a must-have for Perl 6 back then. They simplify a lot of user-visible language constructs, and they make things like resumable exceptions possible. If implemented well, you can get a lot of great features reasonably cheaply from CPS as your control flow mechanism.
Lua uses them, and Lua uses a register architecture.
Why have RISC architectures failed to take over the world?
Windows, I suspect.