|more useful options|
I was sure I'd read that before somewhere. So I went off on a scavenger hunt through code, PM threads, and my reams of random thoughts and jottings, trying to remember where and when. That eventually led me through some of my experiments with Erlang, and then back to Re^18: Why is the execution order of subexpressions undefined?, and indeed that whole (nauseatingly frustrating) thread Why is the execution order of subexpressions undefined?.
The whole motivation behind that thread was an attempt to make the case (which I still firmly believe to be true) that if Perl(6) had a defined execution order, it would greatly increase the potential for fine-grained, interpreter-induced parallel execution, even down to parallelising the clauses within an individual statement or line of code.
Having tracked down that Erlang reference I posted, and followed a few of the links from it, although I hadn't traced my way back to your reference, I was all ready (had started typing :) to claim to have seen it back then (circa April 2004).
It was only then that I noticed the date! (less than a month ago.)
So, maybe I saw something that the author also saw that inspired his blog entry, or maybe it was simply the similarity between its title and Insight needed: Perl, C, Gtk2, Audio, Threads, IPC & Performance - Oh My!, which has sat in my Personal Nodelet for several months, that triggered the feeling of déjà vu. Either way, thanks for the link; it was an interesting read.
On the subject of continuations and the continuation-passing style: that is another of my fears regarding Parrot. That blog entry suggests that the cost of CPS is a single stack frame, and very low, light and fast.
However, other stuff I read, when trying to get to grips with the meaning of CPS and the implications of its use within Parrot when combined with Parrot's
All this combined to give me the impression that Parrot is going to generate a huge runtime stack requirement, albeit that the stack will be implemented as some kind of linked list in the heap. This impression was somewhat confirmed when someone (possibly Dan?) posted a description of what happened when they tried to run a fairly large, computer-generated PAR program.
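To illustrate the shape of that worry (this is my own minimal Python sketch, not Parrot code), here is what continuation-passing style does to "the stack": every return becomes a call to a continuation closure, so the chain of pending work becomes a linked list of heap-allocated closures rather than machine stack frames.

```python
# A minimal CPS sketch. Each pending computation is represented as a
# heap-allocated closure capturing its lexicals and the previous
# continuation -- i.e. the "stack as a linked list in the heap" shape.
# (Python itself still uses its machine stack for the recursive descent,
# since it has no tail-call elimination; a real CPS runtime would
# trampoline these calls.)

def factorial_cps(n, k):
    """Compute n! in CPS; `k` is the continuation receiving the result."""
    if n <= 1:
        return k(1)
    # This lambda is a fresh continuation "frame": it captures `n` and
    # the enclosing continuation `k`, forming a chain of closures.
    return factorial_cps(n - 1, lambda result: k(n * result))

print(factorial_cps(5, lambda x: x))  # -> 120
```

Five calls deep, that is five live closures on the heap before the innermost one fires; the question is what that costs when every sub-call in a large program does the same.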
Unfortunately, I never did understand enough of the Parrot source code, much of which was still temporary back then and was changing, evolving and being re-written in large part on a daily and weekly basis, for me to make the transition from understanding to being able to confirm and formally describe those fears.
The impression remains, but things have moved on too far for me to hope to catch up now. It was another of those gut feelings that led to my agreement with tilly elsewhere that Parrot was unlikely to produce something that would run Perl6 efficiently.
Maybe in the pre-compiling world of GHC, with its huge depth of code analysis and very advanced compile-time optimisations, the cost of using CPS snapshots to implement exception mechanisms stays low because large swathes of intermediate snapshots, between the points in the HLL code that raise exceptions and the points in the code that catch them, can be optimised away at compile time, so reducing the peak stack-frame snapshot costs?
Maybe I simply misunderstand the implications of what needs to be stored when you combine exceptions, closures, nested lexical stashes, nested namespaces and CPS?
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
Lingua non convalesco, consenesco et abolesco. -- Rule 1 has a caveat! -- Who broke the cabal?
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.