zentara has asked for the wisdom of the Perl Monks concerning the following question:

Hi, I know about PersistentPerl, but I was thinking about more straightforward ways of speeding up Perl loading. RAM is cheap now, so why not use it? I tried to copy my /usr/bin/perl binary to /dev/shm/perl, and changed the shebang line to #!/dev/shm/perl. Things ran OK, but I can't tell if it made any speed difference at all. So what do the experts think about this? Would you need to put all the libs in /dev/shm too? Would it be a useful technique, or is there some downside?

Replies are listed 'Best First'.
•Re: Can shmfs speed up Perl?
by merlyn (Sage) on Apr 25, 2003 at 14:56 UTC
    If your OS is up to snuff, any frequently accessed disk page is in memory already anyway. I don't think you should work harder, unless you're memory starved and the binary is getting tossed out of memory too often. But that didn't sound like your problem.

    -- Randal L. Schwartz, Perl hacker
    Be sure to read my standard disclaimer if this is a reply.

Re: Can shmfs speed up Perl?
by tachyon (Chancellor) on Apr 25, 2003 at 15:24 UTC

    You are making all sorts of speed assumptions. Chiefly, your supposition is that Perl is 'slow' because of the time taken to load the Perl binary into memory. In all likelihood you are completely off track. For a start, the binary is probably cached on any decent OS. Disk I/O is the most common bottleneck; crap algorithms rate pretty high as well. If it is a CGI, mod_perl will probably be at least an order of magnitude faster, maybe more. If you have a specific problem you may find we can offer a specific answer. For example, I cut a couple of orders of magnitude off a long-running task here Re: Performance Question, which was a pure disk I/O bottleneck.

Re: Can shmfs speed up Perl?
by Notromda (Pilgrim) on Apr 25, 2003 at 15:19 UTC
    The "slow" startup is related to the fact that perl has to compile your script into a parse tree or "bytecode" (Yes, fellow monks, I read the recent discussions about the B:: modules). If you are running on a command line, I don't think there's much you can do about it.

    OTOH, if you are running in a web environment, mod_perl can greatly speed up the load time. If you are calling a perl script over and over, you could rethink the way you are doing things and make the perl program a long-running process, feeding the data in via a named pipe or something.
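    The named-pipe idea might be sketched like this: start perl once, then feed it jobs through a FIFO so each job skips interpreter startup entirely. The paths here are hypothetical temp files:

```shell
#!/bin/sh
# Sketch of a persistent perl worker fed by a named pipe.
fifo=$(mktemp -u)           # path for the FIFO (hypothetical)
out=$(mktemp)               # where the worker writes its results
mkfifo "$fifo"

# Start perl once; opening the FIFO blocks until a writer appears,
# then the worker processes each line as it arrives and exits on EOF.
perl -ne 'print "processed: $_"' < "$fifo" > "$out" &

# A "client" just writes jobs to the pipe -- no perl startup per job.
printf 'job1\njob2\n' > "$fifo"

wait                        # let the worker drain the pipe and exit
cat "$out"
rm -f "$fifo" "$out"
```

    A real worker would loop forever and reopen the pipe after EOF; this one-shot version just shows the mechanics.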

    Is there a specific problem you are running into?

      "parse tree" ne "bytecode".

      Definitely not the same thing. (Not too surprising, as they're essentially mutually exclusive)

        Right, that's why I put that disclaimer in there. People have been calling the parse tree "bytecode" because of the B::* modules. But as has been said elsewhere, the B doesn't stand for bytecode. :P

        Which reminds me, I really should play with those a little bit.

Re: Can shmfs speed up Perl?
by Abigail-II (Bishop) on Apr 25, 2003 at 15:08 UTC
    Uhm, what made you think that the "slowness" (is it?) of Perl loading was IO bound?