PerlMonks  

fork always 'n' processes.

by TheMagician (Initiate)
on Mar 20, 2018 at 14:16 UTC ( #1211329=perlquestion )

TheMagician has asked for the wisdom of the Perl Monks concerning the following question:

Hello all, I am Paolo F., a newcomer (a.k.a. TheMagician), and I am in search of wisdom. I would like to get a "$message" from this TCP server and process (if available) always two $messages at a time. When a process finishes, another one should be forked, until the "$messages" run out.

### tcp server

use DBI;
use IO::Socket::INET;

### flush after every write
$|= 1;

### infinite loop
while(1) {
    my($socket, $client_socket);
    my($peeraddress, $peerport);
    my($row, $data);
    my @processes= ('one', 'two', 'three', 'four', 'five',
                    'six', 'seven', 'eight', 'nine', 'ten');

    # creating object interface of IO::Socket::INET modules which internally does
    # socket creation, binding and listening at the specified port address.
    $socket = new IO::Socket::INET (
        LocalHost=> '127.0.0.1',
        LocalPort=> '5000',
        Proto=> 'tcp',
        Listen=> 5,
        Reuse=> 1
    ) or die "ERROR in Socket creation: $!\n";

    print "I am waiting clients to connect on port 5000.\n";

    while($row= shift(@processes)) {
        $client_socket = $socket->accept();
        $peeraddress = $client_socket->peerhost();
        print "Sending $row to $peeraddress:$peerport)... ";

        # write to the newly accepted client.
        print $client_socket "$row\n";

        # read from the newly accepted client.
        $data = <$client_socket>;
        chomp($data);
        print "got $data from client.\n";

        $client_socket->close();
    }
    $socket->close();
}

The client part is giving me headaches.

### tcp client

use strict;
use warnings 'all';

sub sfork($&) {
    my($max, $code)= @_;
    foreach my $c (1..$max) {
        wait unless $c<=$max;
        die "Cannot fork: $!\n" unless defined(my $pid= fork);
        exit $code->($c) unless $pid;
    }
    1 until -1 == wait;
}

sfork 2, sub {
    sub getFromProducer {
        use IO::Socket::INET;
        my($socket, $data);
        $socket= new IO::Socket::INET (
            PeerHost=> '127.0.0.1',
            PeerPort=> '5000',
            Proto=> 'tcp'
        ) or die "ERROR in Socket creation: $!\n";
        $socket->autoflush(1);
        $data= <$socket>;
        chomp($data);
        $socket->close();
        return $data;
    }
    while(my $data= &getFromProducer) {
        print "($$) Got $data from producer.\n";
    }
}

I am not able to adapt the sfork to accomplish this task. Any help? All the best, TheMagician (Paolo F.)

Re: fork always 'n' processes.
by tybalt89 (Parson) on Mar 21, 2018 at 00:17 UTC

    I tweaked some things, and completely replaced the client.

    With the sleep in the client, you can see it runs $max at a time.

    #!/usr/bin/perl
    # http://perlmonks.org/?node_id=1211329
    use strict;
    use warnings;

    if( @ARGV ) # if any argument, it's the server, otherwise the client
                # ( for testing purposes :)
    {
      ### tcp server

      use DBI;
      use IO::Socket::INET;

      ### flush after every write
      $|= 1;

      # creating object interface of IO::Socket::INET modules which internally does
      # socket creation, binding and listening at the specified port address.
      my $socket = new IO::Socket::INET (
        LocalHost=> '127.0.0.1',
        LocalPort=> '5000',
        Proto=> 'tcp',
        Listen=> 5,
        Reuse=> 1
      ) or die "ERROR in Socket creation: $!\n";

      print "I am waiting clients to connect on port 5000.\n";

      ### infinite loop
      while(1)
      {
        my($client_socket);
        my($peeraddress, $peerport);
        my($row, $data);
        my @processes= ('one', 'two', 'three', 'four', 'five',
                        'six', 'seven', 'eight', 'nine', 'ten');

        while($row= shift(@processes))
        {
          $client_socket = $socket->accept();
          $peeraddress = $client_socket->peerhost();
          $peerport = $client_socket->peerport();
          print "Sending $row to $peeraddress:$peerport)... ";

          # write to the newly accepted client.
          print $client_socket "$row\n";

          # read from the newly accepted client.
          $data = <$client_socket> // 'nothing';
          chomp($data);
          print "got $data from client.\n";

          $client_socket->close();
        }
        #$socket->close();
      }
    }
    else
    {
      ### tcp client

      my $max = 2;
      my $active = 0;
      while( 1 )
      {
        while( $active < $max )
        {
          if( my $pid = fork )          # parent
          {
            $active++;
          }
          elsif( defined $pid )         # child
          {
            my($socket, $data);
            $socket= new IO::Socket::INET (
              PeerHost=> '127.0.0.1',
              PeerPort=> '5000',
              Proto=> 'tcp'
            ) or die "ERROR in Socket creation: $!\n";
            $socket->autoflush(1);
            $data= <$socket>;
            chomp($data);
            $socket->close();
            print "($$) Got $data from producer.\n";
            sleep 1;
            exit;
          }
          else
          {
            die "fork failed $!";
          }
        }
        if( $active == $max )
        {
          wait;
          $active--;
        }
      }
    }

    Note that the server was trying to read back from the client, which never wrote anything ???

      Thank you tybalt89, yes, you were right... the code is far from working. TheMagician. PS. To all the wisdom keepers: this is a Proof of Concept; what I would like to demonstrate is the scalability you can get from having one producer (server) and several consumers (clients) taking "$messages" from the same TCP source, and whether you can control the number of processes running at the same time. These are the data I am going to collect in order to validate the PoC's effectiveness.
Re: fork always 'n' processes.
by TheMagician (Initiate) on Mar 21, 2018 at 16:40 UTC
    Hello everybody, thanks to tybalt89 I ended up with this (client):

    ### tcp client

    #!/usr/bin/env perl
    use strict;
    use warnings 'all';

    $|= 1;

    my $n= shift || 2;
    my $active= 0;

    sub getFromProducer {
      use IO::Socket::INET;
      my($socket, $data);
      $socket= new IO::Socket::INET (
        PeerHost=> '127.0.0.1',
        PeerPort=> '5000',
        Proto=> 'tcp'
      ) or die "ERROR in Socket creation: $!\n";
      $socket->autoflush(1);
      $data= <$socket>;
      chomp($data);
      print $socket "received\n";
      $socket->close();
      return $data;
    }

    print "Client started.\n";
    while(my $data= &getFromProducer) {
      if($active<$n) {
        if(my $pid= fork) {
          $active++;
        }
        elsif(defined($pid)) {
          print ". Got $data.\n";
          select(undef, undef, undef, int(rand()*10)+1);
          exit
        }
        else {
          die "Cannot fork: $!\n";
        }
      }
      if($active==$n) {
        wait;
        $active--;
      }
    }
    print "Client finished.\n";

    # vim: set nohls nowrap ts=2 sw=2 sts=2 et ft=perl:

    TheMagician.

    PS. I have separated the client from the server, because in my PoC the clients are placed on a different host than the server.

Re: fork always 'n' processes.
by Anonymous Monk on Mar 20, 2018 at 15:25 UTC
    I do not readily see the advantage of using forks, especially on the client side, and especially since the server is handling requests one at a time. The client is always going to be stuck in a rather long I/O wait for the server to respond, and you simply need to throttle how many requests the client sends out at one time. (Each time a reply arrives, it can simply check a counter and submit a new request.)
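
    For illustration, here is a minimal sketch of that counter-based throttling in a single process, using IO::Select to keep up to $max requests in flight at once. The host/port are just the values from the question, and this sketch does not send anything back to the server; it only shows the "a reply arrives, submit a new request" loop.

    use strict;
    use warnings;
    use IO::Socket::INET;
    use IO::Select;

    my $max    = 2;
    my $sel    = IO::Select->new;
    my $active = 0;

    # open one more connection to the producer; returns false once it is gone
    sub open_request {
        my $s = IO::Socket::INET->new(
            PeerHost => '127.0.0.1',
            PeerPort => '5000',
            Proto    => 'tcp',
        ) or return;
        $sel->add($s);
        return 1;
    }

    # prime the pipeline with up to $max outstanding requests
    $active++ while $active < $max && open_request();

    while ($active) {
        # each time a reply arrives, read it and submit a new request
        for my $s ($sel->can_read) {
            my $data = <$s>;
            $sel->remove($s);
            $s->close;
            $active--;
            next unless defined $data;
            chomp $data;
            print "Got $data from producer.\n";
            $active++ if open_request();
        }
    }

    (The sketch stops when it can no longer connect; the real stop condition depends on how the producer signals the end of its queue.)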
      Thanks for answering. 1. I have a queue of tasks that a socket is giving me; 2. I would like to process these tasks 'n' processes at a time; 3. always 'n' processes should be active and running; 4. until the end of the queue has been reached. I am thinking of a 'scalable' way, using more clients to process the task queue. TheMagician

        Have you looked at Parallel::ForkManager?

        Edit: changed title because "always 'n' processes" reads too much like "Guns 'n' Roses" or "fish 'n' chips" to me.

        You should elaborate on your requirements. Is this intended to be portable? Windows/Linux? Do you need it as Perl code? There are utilities that can help in running parallel tasks (e.g. xargs). Do you need those jobs as separate processes, or might a threaded version suit you as well?

        Have you searched for modules (e.g. Parallel::ForkManager)? Do you actually need a robust, efficient solution or is this homework?
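
        In case it helps, here is a minimal sketch of the Parallel::ForkManager route: the module caps the number of forked workers at 'n', and getFromProducer here is just a stripped-down version of the helper from the question (same 127.0.0.1:5000 producer assumed).

        use strict;
        use warnings;
        use IO::Socket::INET;
        use Parallel::ForkManager;

        my $max = shift || 2;
        my $pm  = Parallel::ForkManager->new($max);   # at most $max children at once

        # fetch one "$message" from the producer; returns undef when nothing comes back
        sub getFromProducer {
            my $socket = IO::Socket::INET->new(
                PeerHost => '127.0.0.1',
                PeerPort => '5000',
                Proto    => 'tcp',
            ) or return;
            my $data = <$socket>;
            $socket->close;
            chomp $data if defined $data;
            return $data;
        }

        while (my $data = getFromProducer()) {
            $pm->start and next;     # parent: waits for a free slot, forks, then fetches the next message
            print "($$) processing $data\n";
            sleep 1 + int rand 5;    # stand-in for the real work
            $pm->finish;             # child exits here
        }
        $pm->wait_all_children;

        The messages are still pulled one at a time by the parent; only the processing runs in parallel, which matches the "always 'n' active workers" requirement without hand-rolled fork/wait bookkeeping.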
