PerlMonks  

Child Process as a Function

by pgduke65 (Acolyte)
on Sep 01, 2014 at 01:36 UTC ( [id://1099138]=perlquestion )

pgduke65 has asked for the wisdom of the Perl Monks concerning the following question:

Monks,

I am looking for guidance on a requirement I have: run through a data set and process it in chunks, X records at a time.

In the Korn shell, I would send a parenthesized group of commands to the background and have the main script wait for the child processes to complete.

My initial research has led me to fork/exec. However, all the examples I have seen use a separate script for the child process. Is it possible to have a function (a code reference) execute as the child process?

I am also looking for a solution that will run on both Windows and Unix. I am assuming that this is possible in Perl?

Thank you.

Replies are listed 'Best First'.
Re: Child Process as a Function
by kennethk (Abbot) on Sep 01, 2014 at 02:13 UTC
    I'm not sure which references you've been looking at, but it's possible to just fork, and then the child process does its thing. You still need to do IPC to get any data returned, which usually means pipes. See perlipc.
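    A minimal sketch of that approach, assuming the worker sub, chunk data, and names are invented for illustration: the child simply calls a function and exits, so no separate script is needed.

```perl
use strict;
use warnings;

# Hypothetical worker: any Perl function can run in the child.
sub process_chunk {
    my (@items) = @_;
    # ... real per-chunk work would go here ...
    return scalar @items;
}

my @chunks = ( [ 1 .. 3 ], [ 4 .. 6 ], [ 7 .. 9 ] );

my @pids;
for my $chunk (@chunks) {
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    if ( $pid == 0 ) {              # child branch
        process_chunk(@$chunk);     # run the function, not a script
        exit 0;                     # never fall back into the loop
    }
    push @pids, $pid;               # parent keeps launching children
}

waitpid( $_, 0 ) for @pids;         # block until every child finishes
print "all children done\n";
```

    Note that the child's return value is lost here; as the reply says, getting data back requires IPC such as a pipe (see perlipc).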

    As an alternative, you could use threads instead, which sounds closer to what you really want to run. Both solutions will generally run just fine under *nix and Windows environments, though the fork emulation that has to happen for Windows can lead to some peculiar behavior if you try to get clever.
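    The threads route avoids the IPC step, because join returns the function's result directly. A sketch under the same invented worker and data:

```perl
use strict;
use warnings;
use threads;

# Hypothetical worker: sums one chunk of records.
sub process_chunk {
    my (@items) = @_;
    my $sum = 0;
    $sum += $_ for @items;
    return $sum;
}

my @chunks  = ( [ 1 .. 3 ], [ 4 .. 6 ] );

# One thread per chunk; create() passes the remaining args to the sub.
my @workers = map { threads->create( \&process_chunk, @$_ ) } @chunks;

# join() blocks until the thread finishes and hands back its result.
my @results = map { $_->join } @workers;
print "@results\n";    # prints "6 15"
```

    This requires a perl built with ithreads support (most stock builds, including Strawberry Perl on Windows, have it).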


    #11929 First ask yourself `How would I do this without a computer?' Then have the computer do it the same way.

Re: Child Process as a Function
by Anonymous Monk on Sep 01, 2014 at 03:17 UTC
Re: Child Process as a Function
by Anonymous Monk on Sep 01, 2014 at 11:07 UTC
    Do it the same way that you did it in the Korn shell. "Having to run through data and process it in chunks" does not mean, nor does it imply, that it would be beneficial, much less "faster," to process those chunks in parallel. Just write a simple Perl script that takes two input parameters indicating which slice of the data (start and end) you want to process "this time." Run that script one slice at a time, or use "&" in a Unix/Linux shell to launch multiple jobs =if= you can plainly see that two jobs run in parallel in noticeably less than twice the time of one. Since most data processing is I/O-constrained, multithreading has limited use and should not be reached for instinctively.
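    A sketch of such a slice-taking script, with a made-up worker sub and a demo default in place of real input:

```perl
use strict;
use warnings;

# Hypothetical slice worker: process records $start .. $end.
# A real script would open its input and seek to the slice here.
sub process_slice {
    my ( $start, $end ) = @_;
    my $count = 0;
    for my $rec ( $start .. $end ) {
        # per-record work would go here
        $count++;
    }
    return $count;
}

my ( $start, $end ) = @ARGV;
( $start, $end ) = ( 1, 10 ) unless defined $end;    # demo default
printf "processed %d records\n", process_slice( $start, $end );
```

    Launched from a shell, the slices run in parallel exactly as described, e.g. `perl slice.pl 1 1000 & perl slice.pl 1001 2000 & wait` (script name assumed).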

Node Type: perlquestion [id://1099138]
Approved by Old_Gray_Bear