PerlMonks  

Fork multi processes

by saurya1979 (Initiate)
on May 04, 2012 at 06:46 UTC (#968874=perlquestion)
saurya1979 has asked for the wisdom of the Perl Monks concerning the following question:

Hi, I have a scenario in which there are 10 destinations to rsync. I want to run 5 processes at a time to rsync to 5 destinations (I can do that with fork), BUT I want to start the 6th process for the 6th destination as soon as one of the 5 running processes completes. I don't want to wait for all 5 processes to finish before starting the next chunk: every time one of the 5 processes completes, another should be fired. How can I achieve this using Perl? Thanks, Saurya

Re: Fork multi processes
by salva (Monsignor) on May 04, 2012 at 07:45 UTC
    Use Net::OpenSSH::Parallel:

    my $p = Net::OpenSSH::Parallel->new(workers => 5);
    for my $host (@hosts) {
        $p->add_host($host);
    }
    $p->all(rsync_put => $src, $dst);
    $p->run;
      Thank you Salva. Is it possible to not use any external Perl module and instead use built-in modules to achieve it?
        In my case, there is just 1 source and 1 destination server for rsync. I want to rsync directories located at different place on source server to the destination server. So I wanted to run multiple rsyncs to make it faster.
        Yes, sure, you can achieve it, though it would take you a while to do it.
        The entire point of Perl is that you do not have to. There are no points to be earned for laboriously doing what has already been done (better) by someone else, such that all you need to do to solve your problem is to install something and then write five or ten additional lines in order to do it.
Re: Fork multi processes
by BrowserUk (Pope) on May 04, 2012 at 09:37 UTC

    A simple threaded solution. Substitute your rsync commands for the sleep.pl:

    #! perl -slw
    use strict;
    use threads;

    for ( 1 .. 10 ) {
        async {
            my $secs = rand 10;
            system "sleep.pl $secs";
        };
        sleep 1 while threads->list( threads::running ) > 5;
        $_->join for threads->list( threads::joinable );
    }
    sleep 1 while threads->list( threads::running );
    $_->join for threads->list( threads::joinable );


Re: Fork multi processes
by DrHyde (Prior) on May 04, 2012 at 09:49 UTC
    Use Parallel::ForkManager. There's an example here. I see that in a reply you said you wanted to do it without using any non-core modules. That's stupid. But if you want to perpetrate stupidity (perhaps because your boss is an arse-hat) then I suggest that you take Parallel::ForkManager and simply put a copy of it in your code.
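    For reference, a minimal sketch of the Parallel::ForkManager pattern suggested above. The destination list and the rsync command line are placeholders, not something taken from this thread:

    ```perl
    #!/usr/bin/perl
    use strict;
    use warnings;
    use Parallel::ForkManager;    # CPAN module, not core

    # Hypothetical destination list; substitute your real rsync targets.
    my @dests = map { "dest$_" } 1 .. 10;

    my $pm = Parallel::ForkManager->new(5);      # never more than 5 children
    my $finished = 0;
    $pm->run_on_finish( sub { $finished++ } );   # parent-side bookkeeping

    for my $dest (@dests) {
        $pm->start and next;    # parent continues; blocks while 5 are busy
        # child: do the real work here, e.g.
        # system 'rsync', '-a', '/some/dir/', $dest;
        $pm->finish;            # child exits, freeing a slot immediately
    }
    $pm->wait_all_children;
    ```

    start() blocks whenever 5 children are alive and returns as soon as one exits, which is exactly the "refill the pool" behaviour the question asks for.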
Re: Fork multi processes
by zentara (Archbishop) on May 04, 2012 at 10:19 UTC
    Here are a couple more code examples that need no extra modules, which may work for you.
    #!/usr/bin/perl
    # by Abigail of perlmonks.org
    #
    # Sometimes you have a need to fork off several children, but you want to
    # limit the maximum number of children that are alive at one time. Here
    # are two little subroutines that might help you, mfork and afork. They
    # are very similar. They take three arguments, and differ in the first
    # argument. For mfork, the first argument is a number, indicating how
    # many children should be forked. For afork, the first argument is an
    # array - a child will be forked for each array element. The second
    # argument indicates the maximum number of children that may be alive at
    # one time. The third argument is a code reference; this is the code
    # that will be executed by the child. One argument will be given to this
    # code fragment; for mfork it will be an increasing number, starting at
    # one. Each next child gets the next number. For afork, the array
    # element is passed. Note that this code will assume no other children
    # will be spawned, and that $SIG{CHLD} hasn't been set to IGNORE.

    mfork( 10, 10, \&hello );

    sub hello { print "hello world\n"; }

    print "all done now\n";

    sub mfork ($$&) {
        my ( $count, $max, $code ) = @_;
        foreach my $c ( 1 .. $count ) {
            wait unless $c <= $max;
            die "Fork failed: $!\n" unless defined( my $pid = fork );
            exit $code->($c) unless $pid;
        }
        1 until -1 == wait;
    }

    sub afork (\@$&) {
        my ( $data, $max, $code ) = @_;
        my $c = 0;
        foreach my $data (@$data) {
            wait unless ++$c <= $max;
            die "Fork failed: $!\n" unless defined( my $pid = fork );
            exit $code->($data) unless $pid;
        }
        1 until -1 == wait;
    }
    and another example
    #!/usr/bin/perl
    # by merlyn
    use POSIX ":sys_wait_h";

    my @tasks = ( 1 .. 398 );
    my %kids;

    {
        while ( @tasks and keys %kids < 5 ) {
            $kids{ fork_a_task( shift @tasks ) } = "active";
        }
        {
            my $pid = waitpid( -1, 0 );
            if ( $pid == -1 ) {
                %kids = ();
            }
            else {
                delete $kids{$pid};
            }
        }
        redo if @tasks or %kids;
    }

    sub fork_a_task {
        my $i = shift;
        my $pid = fork;
        return $pid if $pid;
        unless ( defined $pid ) {
            warn "cannot fork: $!";
            return 0;
        }
        ## do stuff for task $i goes here...
        print "Doing $i\n";
        exit 0;
    }

Re: Fork multi processes
by JavaFan (Canon) on May 04, 2012 at 11:51 UTC
    How can I achieve it using Perl?
    The same as in C. In a nutshell: fork, fork, fork, fork, fork, wait, fork. That is, you fork off 5 children, then wait for any of them to finish, after which you fork the 6th.

    Of course, you shouldn't ignore SIGCHLD, and you may need to do some more bookkeeping if you do other forks as well.
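    The fork/wait bookkeeping described above can be sketched with core Perl only. The @dests list and the rsync command line here are placeholders, not something from this thread:

    ```perl
    #!/usr/bin/perl
    use strict;
    use warnings;

    # Core-only sketch of "fork 5, wait for any one, fork the 6th".
    my @dests   = map { "dest$_" } 1 .. 10;    # hypothetical targets
    my $max     = 5;
    my $running = 0;
    my $reaped  = 0;

    for my $dest (@dests) {
        if ( $running >= $max ) {
            wait;                  # block until any one child exits
            $running--;
            $reaped++;
        }
        my $pid = fork;
        die "fork failed: $!\n" unless defined $pid;
        if ( $pid == 0 ) {
            # child: replace this with the real transfer, e.g.
            # exec 'rsync', '-a', '/some/dir/', $dest;
            exit 0;
        }
        $running++;
    }
    $reaped++ while wait != -1;    # reap the stragglers
    ```

    As JavaFan notes, this assumes $SIG{CHLD} has not been set to IGNORE; otherwise wait would not see the children exit.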

Re: Fork multi processes
by sundialsvc4 (Abbot) on May 04, 2012 at 13:21 UTC

    Without trying to be too disparaging or insulting on the matter, the simple fact remains that this requirement definitely is something that has already been done. For instance, the git version-control system quite routinely launches parallel processes or threads to do this sort of thing. Therefore, we can quite reliably say that the necessary code to do this has been done before; as, in fact, we see that it has.

    This I learned from a cereal box:

    “Actum Ne Agas: Do Not Do A Thing Already Done.™”

    Always start, and usually end, your quest at http://search.cpan.org. You’ll find everything but the Bundle::InterchangeKitchenSink in there if you look around long enough. No matter what you are doing, you are emphatically not the first one to have done it by now, and you can and should always seek to leverage that fact to your fullest advantage. Perl’s CPAN is especially rich in opportunities to do that.

    So, in that respect, “it’s not ‘exactly like C,’” and this is a difference that makes all the difference in the world. I submit (and I trust you can reasonably guess the intended extent of my point) that we do not embrace this language system because of what it enables us to write, but rather, for what it enables us to avoid writing. Pre-hung doors and prefabricated windows; fully assembled kitchen appliances; furnished apartments with well-stocked wine cabinets.

      Programmers who run to CPAN at first impulse, even for the most trivial task, are little more than script kiddies without much understanding, and, when interviewed for a job, will be filtered out and discarded at the first opportunity.

      Typing in 'fork' and 'wait' hardly takes more effort than searching on CPAN, downloading a module, reading the documentation, and then typing in the necessary code so the module can do the trivial task for you. And going the CPAN route robs you of the opportunity to learn some basic coding skills.

      Do Not Do A Thing Already Done.
      So, if today I use a module that helps me spawn 6 children, but never more than 5 at a time, and tomorrow I have another program that needs to spawn 6 children, but never more than 5, I should not use this module, because that means doing a thing already done?

      Can I at least use use strict; more than once?

        You better use strict; at least once per child or you'll wind up with a bunch of young thugs ;^)

Node Type: perlquestion [id://968874]
Front-paged by Corion