Hmm, I'm doubtful. AFAICS the assumption he makes is that simply having a persistent perl interpreter in memory, which loads scripts and modules at runtime, will save a lot of time and processing power. I don't think that assumption is correct. Try this in your shell:
sh-3.1$ time perl -e ''
sh-3.1$ time perl -MCGI::Simple -e ''
Loading the perl interpreter and compiling and executing the no-op takes 0.005 seconds, whereas doing the same while also loading the CGI::Simple module takes more than five times as long (you can try this with CGI as well, but that's a little unkind ;-). Since his scheme still loads scripts and modules at runtime, it could only save a small part of those 0.005 seconds, namely the time it takes the OS to fork a new process and load the perl interpreter into memory; the module loading and compilation, which dominate, would still happen on every request. I may have some flaw in my thinking here (in which case I'd be thankful if someone more knowledgeable than me could point it out), but I just don't see the point of his mod_perlite.
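If you'd rather average out run-to-run noise than eyeball time(1), something like this works (a quick sketch using Time::HiRes; the 20-iteration count is arbitrary):

use strict;
use warnings;
use Time::HiRes qw(gettimeofday tv_interval);

# Time bare perl startup vs. startup plus loading CGI::Simple,
# averaged over several runs to smooth out noise.
for my $cmd ("perl -e ''", "perl -MCGI::Simple -e ''") {
    my $t0 = [gettimeofday];
    for (1 .. 20) {
        system($cmd) == 0 or die "command failed: $cmd\n";
    }
    printf "%-26s %.4f s/run\n", $cmd, tv_interval($t0) / 20;
}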
The key problems we want to work around are user and web host aversion to CGI on the one hand, and the intense complexity and persistence of mod_perl on the other. Imagine if this were all you needed to add to Apache before you could throw any file with a .pl extension into your web root and be off and running Perl:
LoadModule perlite_module modules/mod_perlite.so
AddType application/x-httpd-perlite .pl
This we can sell shared hosting providers on. mod_perl we unfortunately cannot.
I think that what he's missing here is that many ISPs run PHP via CGI, not via mod_php. They do this for security. In fact, PHP is not comparable in speed to mod_perl at all unless you use a code cache, which makes it stateful by keeping the compiled code in memory.
... which takes us back to the problems of using mod_perl in a shared hosting environment: any script can affect the environment of other users, and globals can persist from request to request. I don't know if similar problems exist in PHP.
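To make the second problem concrete, here's the classic demonstration (a sketch, assuming the script runs under something like Apache::Registry, which compiles it once and keeps it in memory):

#!/usr/bin/perl
use strict;
use warnings;

# Package global: under mod_perl this survives between requests served
# by the same Apache child; under plain CGI it would start at zero on
# every request.
our $hits;
$hits++;

print "Content-Type: text/plain\n\n";
print "This child has run this script $hits time(s)\n";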
What would be an ideal starter environment is to be able to:
load Perl at startup
provide completely separate perl environments for different users
load all the required modules at startup
serve each request with a clean "just started" stack, so that previous requests have no impact.
I know very little about the internals, but what about:
starting a single Perl interpreter ("root process")
forking a separate process for each customer / website; this acts as the "parent process" for that website
loading all required modules into this new "parent process"
for each request, forking a Perl interpreter from the "parent process"
I may be barking up the wrong tree, and I don't know how heavy these forks are (whether they would be a real gain over straight CGI), but it may be worth a shot.
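To illustrate, here is a toy version of steps 3 and 4 (purely a sketch; it treats lines on STDIN as stand-ins for incoming requests):

#!/usr/bin/perl
use strict;
use warnings;
use CGI::Simple;    # example module, loaded and compiled once in the parent

# Toy stand-in for a request loop: each line of input is one "request".
while (my $req = <STDIN>) {
    chomp $req;
    my $pid = fork;
    die "fork failed: $!" unless defined $pid;
    if ($pid == 0) {
        # Child: inherits the already-compiled modules (copy-on-write),
        # but starts with a clean stack; nothing persists to the next request.
        print "child $$ handled: $req\n";
        exit 0;
    }
    waitpid $pid, 0;    # parent: wait, then loop for the next request
}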
What you describe above is already possible with mod_perl. Just set MaxRequestsPerChild to 1, and each process will exit after a single request and cause another one to be forked.
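For reference, that's a one-line change in httpd.conf:

MaxRequestsPerChild 1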
It's better than CGI, but it still sucks compared to really using mod_perl. It means you can't have persistent database connections, cached database statement handles or data, and similar performance tweaks that are only possible in a persistent environment.
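For example, these DBI conveniences only pay off when the process sticks around (a sketch; the DSN, credentials, and query are made up):

use DBI;

# connect_cached() and prepare_cached() return the same handles on every
# request within one persistent child, so connection setup and statement
# preparation happen once. A fork-and-exit model rebuilds them every time.
my $dbh = DBI->connect_cached('dbi:mysql:mydb', 'user', 'secret',
                              { RaiseError => 1 });
my $sth = $dbh->prepare_cached('SELECT name FROM widgets WHERE id = ?');
$sth->execute(42);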
I admit to not being a PHP expert, but people who are have told me that when you use a code cache with PHP, you hit scoping issues similar to the mod_perl ones. Otherwise, there would be no persistent database connections in PHP either.