
Re: Re: mod_perl and shared environments don't mix - do they?

by bean (Monk)
on Aug 08, 2003 at 06:00 UTC (#282118)

in reply to Re: mod_perl and shared environments don't mix - do they?
in thread mod_perl and shared environments don't mix - do they?

Actually, you can't use all available memory with PHP - memory is capped by the memory_limit setting in php.ini (8 MB by default, I think). As for consuming all CPU, execution time is likewise capped by max_execution_time (30 seconds by default). Leaking memory is another fun thing you can do with mod_perl but not with PHP - or at least that used to be the big complaint, years ago. Plus, IIRC, apache+php is a lighter-weight binary than apache+mod_perl - taking less memory means you can handle more connections before swapping. And since there are fewer useful modules/classes to pull in, PHP code tends to be leaner and meaner, so it usually executes more quickly. PHP partially makes up for this lack by having a(n inconsistently named) function for absolutely everything (which is also a weakness).
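Both of those caps are plain php.ini directives; here's a minimal sketch of the defaults being described (values are from old PHP 4.x-era stock builds - check your own):

```ini
; php.ini - per-script resource caps
memory_limit = 8M          ; allocation beyond this aborts the script
max_execution_time = 30    ; seconds a script may run before it is killed
```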

Ok, I admit it - I use PHP (not by choice - it's the company standard at my work). I feel so *dirty* - I wash and I wash, but I can't get my namespace clean...

Obviously mod_perl is much more powerful - but with great power comes great responsibility - and that's in short supply.

Replies are listed 'Best First'.
Re: Re: Re: mod_perl and shared environments don't mix - do they?
by perrin (Chancellor) on Aug 08, 2003 at 14:50 UTC
    Limiting the amount of memory and CPU that mod_perl uses is also quite easy (using rlimits as in Apache::Resource).
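    For example, a minimal httpd.conf sketch for mod_perl 1.x (the directives come from Apache::Resource; the PERL_RLIMIT_* values here are illustrative, not recommendations):

    ```apache
    # httpd.conf - cap each mod_perl child with rlimits via Apache::Resource
    PerlModule Apache::Resource
    PerlSetEnv PERL_RLIMIT_DATA 32:48     # soft:hard data segment size, in MB
    PerlSetEnv PERL_RLIMIT_CPU  60:120    # soft:hard CPU seconds
    PerlChildInitHandler Apache::Resource
    ```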

    I think you are misinformed about leaking memory. Most of what people call leaking memory under mod_perl is simply the program using memory as it needs it to do more work. PHP will use memory too, if you load a big data structure from a file or database. A true leak requires you to create a circular data structure or use a module with bad XS code. Running mod_perl by itself is quite stable.

    PHP is not as fast as mod_perl. Numerous benchmarks have shown this, including the ones Yahoo did to decide on using PHP.

    The subject of this thread was security, and why PHP is considered secure in a shared environment. The answers seem to be that it can be run as CGI (which of course is true of Perl as well) or that it can be run in a safe shared mode. The latter is an advantage over mod_perl for ISPs. At the moment, people looking for a cheap mod_perl ISP have to go for either a virtual server environment (where you get your own server with root access, but not a dedicated box) or SpeedyCGI.

      I thought the subject of the thread was why mod_perl doesn't work in a shared environment (maybe I should read more than the headlines...)

      Anyway, if you use a Perl module to limit the memory and CPU usage of a mod_perl script, then in a shared environment you're letting the fox guard the henhouse - unless Apache::Resource prevents the children from accessing the same functionality.

      I may well be wrong about mod_perl leaking memory - that tidbit is based on hearsay from 4-5 years ago. However, it made sense to me since you can load C binaries in a Perl module. I guess you could run a binary from PHP with a system call, but the sysadmin can disable those functions (and the changing of environment variables) in php.ini, making this easy to stop.
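      That php.ini lockdown amounts to a couple of directives; a sketch (the directive names are real, the exact function list is illustrative):

      ```ini
      ; php.ini - shared-host hardening: no shell-outs, no env tampering
      disable_functions = exec,passthru,shell_exec,system,popen,proc_open,putenv
      safe_mode = On   ; PHP 4/5-era shared-hosting switch (removed in PHP 5.4)
      ```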

      As far as speed goes, mod_perl may well be faster than the equivalent php code, but that wasn't the point - the point was that it's more likely that the php code will be leaner, since the temptation to load 20 modules isn't there (there aren't 20 modules worth loading). So the php code tends to be written precisely to the purpose at hand, and may easily be half as complex, giving it a speed advantage.

      The long and the short of it is, system administrators don't want you to have access to apache internals - it makes them nervous.
        It's true, Apache::Resource would allow malicious users to remove limits. It is meant to help people avoid shooting themselves in the foot, not to prevent intentional attacks. It might work better to set rlimits as root during Apache startup.
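        A sketch of that root-at-startup approach, using BSD::Resource (the same module Apache::Resource uses underneath) in a mod_perl startup.pl - the module and setrlimit call are real, the limit values are illustrative:

        ```perl
        # startup.pl - runs as root before Apache forks; every child inherits
        # these limits, and once the children drop to an unprivileged user
        # they cannot raise the hard caps back up.
        use BSD::Resource qw(setrlimit RLIMIT_CPU RLIMIT_DATA);

        setrlimit(RLIMIT_CPU, 60, 120)                   # 60s soft / 120s hard
            or die "cannot set RLIMIT_CPU: $!";
        setrlimit(RLIMIT_DATA, 32 * 2**20, 48 * 2**20)   # 32/48 MB data segment
            or die "cannot set RLIMIT_DATA: $!";
        ```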

        it made sense to me since you can load C binaries in a Perl module

        The extensions to PHP that allow things like database access are also written in C. As I understand it, writing custom PHP libraries that use C code is a very popular practice among high-volume sites.

        the php code tends to be written precisely to the purpose at hand, and may easily be half as complex, giving it a speed advantage

        Coding it yourself vs. using a CPAN module is only faster if you can always write faster code than the people who write the modules. Given the high quality of the most common Perl modules (DBI, for example), this seems unlikely. There certainly are some large "kitchen-sink" CPAN modules out there, but anyone concerned about performance can easily steer clear of them.

        I don't think that mod_perl's API for accessing apache data is inherently more dangerous than the PHP equivalent. They both give you access to the same information about the request. What is more dangerous is that mod_perl doesn't provide a safe sandbox where code written by one user can't possibly affect code written by another. This is addressed in mod_perl 2, but it's not stable yet. Meanwhile, people looking for shared hosting with mod_perl still need to go with a virtual server or SpeedyCGI.
