PerlMonks  

keep a module in shared memory

by dbw (Beadle)
on Sep 11, 2009 at 15:28 UTC (#794797=perlquestion)
dbw has asked for the wisdom of the Perl Monks concerning the following question:

I have a script that 'use's some pretty large modules. The script is triggered by an incoming email, and they come in fast, so there are often a number of copies of the script running in parallel. Is there any way to load the modules into shared memory, so that each instance of the script does not need to allocate memory to load the modules?

/usr/bin/perl '-nemap$.%$_||redo,2..$.++;print$.--'

Re: keep a module in shared memory
by Fletch (Chancellor) on Sep 11, 2009 at 15:45 UTC

    Nothing quite like that comes to mind, but you might look into PersistentPerl, which transparently keeps a backend perl process running while subsequent invocations talk to that existing instance (think FastCGI, but not tied to a CGI / web server request context).
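    With PersistentPerl the change is often just the interpreter line; the sketch below assumes the interpreter was installed as /usr/bin/perperl, and the commented-out module name is a stand-in for the question's heavy modules:

```perl
#!/usr/bin/perperl
# PersistentPerl (formerly SpeedyCGI) keeps this interpreter resident
# between invocations, so the expensive module loads below happen only
# when a fresh backend process starts, not once per email.
# The /usr/bin/perperl path is an assumption; check your installation.
use strict;
use warnings;
# use Your::Large::Module;   # hypothetical heavy module from the question

# ... per-email processing goes here, unchanged ...
```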

    Another alternative would be to use something like SOAP::Lite or another RPC mechanism: run a persistent backend server that answers requests from thin clients, which just marshal each request into a SOAP call and return the result.

    The cake is a lie.
    The cake is a lie.
    The cake is a lie.

Re: keep a module in shared memory
by ikegami (Pope) on Sep 11, 2009 at 15:52 UTC
    No. You'll need to share a Perl process. Fletch provided means of doing this.
Re: keep a module in shared memory
by tmaly (Monk) on Sep 11, 2009 at 16:27 UTC

    You could create a POE-based server that accepts asynchronous requests. That would let you load the modules once and then just handle each request as it arrives.

    The POE Cookbook has some good examples.

Re: keep a module in shared memory
by cdarke (Prior) on Sep 11, 2009 at 17:15 UTC
    If you are using modules with a large XS component then those parts will be shared anyway. On Windows a DLL gets created, and on UNIX a .so (shared object) file is used. These will share code and read-only segments between processes, but not data segments which are written to, such as the heap.
Re: keep a module in shared memory
by CountZero (Bishop) on Sep 11, 2009 at 18:19 UTC
    Questions you should ask yourself: "are my programs running out of memory?" and "is my application unacceptably slow in starting up?"

    If the answer to those questions is "no" then changing to a client-server solution may just be an unnecessary optimization, likely to increase the development and maintenance cost of your application for no good reason.

    As the saying goes: The better is the enemy of the good.

    CountZero

    "A program should be light and agile, its subroutines connected like a string of pearls. The spirit and intent of the program should be retained throughout. There should be neither too little nor too much, neither needless loops nor useless variables, neither lack of structure nor overwhelming rigidity." - The Tao of Programming, 4.1 - Geoffrey James

Re: keep a module in shared memory
by Sewi (Friar) on Sep 11, 2009 at 19:51 UTC
    Look at Net::SMTP::Server, build a wrapper around your program, and (for security reasons) let your local mail server deliver the mail to your program on a non-standard port via 127.0.0.1.

    For SOAP: read the beginning of the description of SOAP::Simple; it describes SOAP perfectly.

Re: keep a module in shared memory
by afoken (Parson) on Sep 13, 2009 at 10:36 UTC

    Start a small process for each incoming email that just puts it into a queue. Have a second, large process that works through the queue step by step. Look at the docs of your Mail Transport Agent (MTA); it may already have a usable queueing mechanism and a hook for processing queue elements.

    The MTA hook may start a process for each queue element. Instead of using a fat Perl script there, you could again resort to a small process that connects to a permanently running, fat server process.
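    A spool-directory queue along these lines can be sketched with core Perl only. The Maildir-style tmp/ plus new/ layout and the drain step are illustrative assumptions, not the poster's actual setup:

```perl
#!/usr/bin/perl
# Sketch: atomic enqueue via rename(), plus a worker that drains the queue.
use strict;
use warnings;
use File::Path qw(make_path);

my $spool = "/tmp/mailspool.$$";      # hypothetical spool directory
make_path("$spool/tmp", "$spool/new");

my $seq = 0;

# Small, fast enqueue process: write into tmp/, then rename() into new/.
# rename() is atomic on one filesystem, so the worker never sees a
# half-written file.
sub enqueue {
    my ($mail_text) = @_;
    my $name = join ".", time(), $$, ++$seq;
    open my $fh, ">", "$spool/tmp/$name" or die "open: $!";
    print $fh $mail_text;
    close $fh or die "close: $!";
    rename "$spool/tmp/$name", "$spool/new/$name" or die "rename: $!";
}

# The single fat worker: load the big modules once, then loop over new/.
sub drain {
    my @mails;
    for my $file (sort glob "$spool/new/*") {
        open my $fh, "<", $file or next;
        local $/;                      # slurp mode
        push @mails, scalar <$fh>;     # real processing would happen here
        close $fh;
        unlink $file;
    }
    return @mails;
}

enqueue("first message");
enqueue("second message");
my @processed = drain();
print scalar(@processed), " messages processed\n";   # prints "2 messages processed"
```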

    For IPC, you could use Unix sockets, named pipes, raw TCP sockets, or HTTP, for example. Raw TCP or Unix sockets are easy to implement, about 100 lines of C code, resulting in a fast and small executable.
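    In Perl rather than C, the same Unix-socket idea fits in a few lines with core modules. The socket path and the uppercasing "work" below are stand-ins for the real mail processing:

```perl
#!/usr/bin/perl
# Sketch: persistent "fat" server on a Unix-domain socket, plus the thin
# client that would be started once per email. Core modules only.
use strict;
use warnings;
use IO::Socket::UNIX;
use Socket qw(SOCK_STREAM);

my $path = "/tmp/mailworker.$$.sock";   # hypothetical socket path
unlink $path;

my $pid = fork();
die "fork: $!" unless defined $pid;

if ($pid == 0) {    # child: long-running server that loaded the big modules
    my $server = IO::Socket::UNIX->new(
        Type   => SOCK_STREAM,
        Local  => $path,
        Listen => 5,
    ) or die "listen: $!";
    while (my $conn = $server->accept) {
        chomp(my $request = <$conn>);
        print $conn uc($request), "\n";  # stand-in for real mail processing
        close $conn;
    }
    exit 0;
}

# parent: the small per-email client
my $client;
for (1 .. 50) {                          # wait up to ~5s for the server to bind
    $client = IO::Socket::UNIX->new(Type => SOCK_STREAM, Peer => $path);
    last if $client;
    select undef, undef, undef, 0.1;
}
die "connect: $!" unless $client;

print $client "hello\n";
chomp(my $reply = <$client>);
print "$reply\n";                        # prints "HELLO"

kill 'TERM', $pid;
waitpid $pid, 0;
unlink $path;
```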

    I prefer using daemontools to manage servers, because daemontools takes care of turning a simple and stupid script into a fully-featured daemon, complete with reliable logging, restarting, managing and so on.

    Alexander

    --
    Today I will gladly share my knowledge and experience, for there are no sweeter words than "I told you so". ;-)

Node Type: perlquestion [id://794797]
Approved by Corion
Front-paged by tye