Re: How to Fix CGI

by perrin (Chancellor)
on Nov 07, 2007 at 23:54 UTC


in reply to How to Fix CGI

I think what he's missing here is that many ISPs run PHP via CGI, not via mod_php. They do this for security. In fact, PHP is not comparable in speed to mod_perl at all unless you use a code cache, which makes it stateful by keeping the compiled code in memory.

Re^2: How to Fix CGI
by clinton (Priest) on Nov 08, 2007 at 13:32 UTC
    Agreed.

    ... which takes us back to the problems of using mod_perl in a shared hosting environment - any script can affect the environment of other users, and globals can persist from request to request. I don't know if similar problems exist in PHP.
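
    For example, a package global in a Registry script keeps its value across all the requests served by the same mod_perl child. A minimal sketch (the script name is made up):

        #!/usr/bin/perl
        # hits.pl : run under ModPerl::Registry (or Apache::Registry on
        # mod_perl 1); compiled once per child, so the global below persists
        use strict;
        use warnings;

        our $hits;    # package global: NOT reset between requests
        $hits++;      # each request served by this child adds one

        print "Content-Type: text/plain\n\n";
        print "This child has now served $hits request(s)\n";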

    An ideal starter environment would let you:

    • load Perl at startup
    • provide completely separate Perl environments for different users
    • load all the required modules at startup
    • serve each request with a clean "just started" stack, so that previous requests have no impact.

    I know very little about the internals, but what about:

    • starting a single Perl interpreter (the "root process")
    • forking a separate process for each customer / website, which acts as the "parent process" for that website
    • loading the required modules into this new "parent process"
    • forking a Perl interpreter from the "parent process" for each request
    I may be barking up the wrong tree, and I don't know how heavy these forks are (whether they would be a real gain over using straight CGI), but it may be worth a shot.
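
    A very rough sketch of the flow I have in mind, outside Apache entirely (accept_next_request() and handle_request() are placeholders for whatever the server would actually do):

        #!/usr/bin/perl
        # "parent process" for one customer's website: preload modules once,
        # then fork a fresh, clean child for every request
        use strict;
        use warnings;

        use CGI ();    # the customer's modules, compiled once here and
                       # shared with children via copy-on-write

        while ( my $request = accept_next_request() ) {   # placeholder
            my $pid = fork();
            die "fork failed: $!" unless defined $pid;
            if ( $pid == 0 ) {
                handle_request($request);   # placeholder per-request code
                exit 0;                     # child exits: nothing persists
            }
            waitpid( $pid, 0 );             # reap before the next request
        }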

    Clint

      What you describe above is already possible with mod_perl. Just set MaxRequestsPerChild to 1, and each process will exit after a single request and cause another one to be forked.
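
      In Apache configuration terms (the directives are standard Apache/mod_perl ones; the startup file path is only an example):

          # httpd.conf, prefork MPM
          PerlRequire /etc/apache/startup.pl   # preload modules in the parent
          StartServers         10              # keep pre-forked children ready
          MaxRequestsPerChild  1               # each child dies after one request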

      It's better than CGI, but it still sucks compared to really using mod_perl. It means you can't have persistent database connections, cached database statement handles or data, and similar performance tweaks that are only possible in a persistent environment.
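
      For instance, in a long-lived process you can do this (Apache::DBI and DBI's prepare_cached() are real; the DSN, table, and values are just examples):

          # In startup.pl, loaded once:
          use Apache::DBI ();   # makes DBI->connect() return persistent handles

          # In a handler or Registry script:
          use DBI ();
          my $dbh = DBI->connect( 'dbi:mysql:mydb', 'user', 'secret' );
          my $id  = 42;                        # example value
          my $sth = $dbh->prepare_cached('SELECT name FROM widgets WHERE id = ?');
          $sth->execute($id);

      With MaxRequestsPerChild set to 1, none of those handles survives the request, which is exactly what you lose.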

        If this works reasonably well, why hasn't it caught on?
        ISPs and shared web hosts cannot provide mod_perl for a simple reason: with mod_perl, users can do deadly things by putting Perl* directives in their own .htaccess files, and there is currently no way to forbid this. There is in mod_ruby. A patch for this has been submitted (maybe more than once) but rejected; I don't think the mod_perl developers care enough about shared hosting environments.
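
        The sort of thing I mean, in a hypothetical customer's .htaccess (the module name is invented; the directives are real mod_perl 1 ones):

            # .htaccess : if Perl* overrides are honoured, this loads the
            # customer's own code into the shared interpreter, where it can
            # poke at every other user's globals
            SetHandler  perl-script
            PerlModule  My::Evil::Handler     # loads arbitrary code...
            PerlHandler My::Evil::Handler     # ...and runs it on every request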
Re^2: How to Fix CGI
by Anonymous Monk on Nov 08, 2007 at 21:23 UTC
    Code caching does not make mod_php stateful. It's equivalent to keeping .pmc files in memory. Nothing more.
      I admit to not being a PHP expert, but people who are have told me that when you use a code cache with PHP you hit similar scoping issues to the mod_perl ones. Otherwise, there would be no persistent database connections in PHP either.
        PHP persistent database connections are handled by each database module itself and have nothing to do with code caching.

        I believe that your informant is quite wrong about PHP scoping problems with code caching. Perhaps the person meant something related to stupid old bugs like this: Bug #5875 APC removes $_SERVER from global scope.
