
Fork issues

by 3SRT (Novice)
on Feb 12, 2008 at 21:28 UTC ( #667683=perlquestion )
3SRT has asked for the wisdom of the Perl Monks concerning the following question:

Hello fellow monks,

I created a simple 'server' script that accepts input from a CGI script. The server also handles multiple connections on the same port. In a nutshell, this 'server' script takes in variables, forks off a process that sleeps for a given amount of time, and then sends the user an e-mail (and performs other functions).

Here is my problem: I receive a confirmation e-mail once the connection has been made, but I do not receive the other e-mail (which is supposed to be sent from the forked process once the sleep() is up). I receive one confirmation e-mail per client, but never the second e-mail. I don't understand why the forked process isn't behaving as I expect.

Also, the problem ONLY happens when I daemonize the process. If I simply run the script in a terminal with 'perl', it works perfectly fine and I get all the desired results. However, when I daemonize it with 'perl &', the functions within the fork, including the second e-mail, are not executed.

Shouldn't I be able to daemonize it and get the same results?

Here is the code:

#!/usr/bin/perl
# includes and dependencies....
use strict;
use warnings;
use IO::Socket::INET;

my $max_clients = 10;
my $port        = 15100;

while (1) {
    my $sock = new IO::Socket::INET(
        LocalHost => 'localhost',
        LocalPort => $port,
        Proto     => 'tcp',
        Listen    => $max_clients,
        Reuse     => 1,
    );
    die "Could not create socket: $!\n" unless $sock;

    my $new_sock = $sock->accept();
    while (<$new_sock>) {
        # parse information
        close($sock);
        # send e-mail function here
        unless (fork) {
            # sleep for given time
            sleep(30);
            # perform function
            # 2nd send e-mail function here
            exit(0);
        }
    }
}

Thanks in advance!

Replies are listed 'Best First'.
Re: Fork issues
by almut (Canon) on Feb 12, 2008 at 22:21 UTC
    ...when I daemonize it by 'perl &', the functions within the fork, including the second e-mail, will not be executed.

    Do the "other functions" by any chance write anything to stdout/stderr? In that case, the shell would stop the backgrounded process until you foreground it again (using fg, normally), at which point you should see the pending output... Just a thought.

Re: Fork issues
by superfrink (Curate) on Feb 13, 2008 at 06:10 UTC
    Have a look at the Net::Server modules. They take care of the network and forking stuff for you. You only have to write a subroutine that the child will run.

    I ran the posted code but replaced the e-mail function here comments with print "A\n" and print "B\n". I found both letters were printed to the same terminal the program was run from just as expected. This was the case both with and without the &. I am running Fedora 6. Your OS might behave differently.

    Something else I noticed is that the parent process in that code does not wait for the child to exit. As a result, your system will accumulate zombie processes. One fix is to call wait() in the parent; another option is to include $SIG{'CHLD'} = 'IGNORE'; before any forking code.
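    For illustration, here is a minimal sketch of both reaping strategies mentioned above (the handler body and messages are my own, not from the thread):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use POSIX ":sys_wait_h";    # provides WNOHANG

    # Option 1: let the kernel reap children automatically.
    # $SIG{'CHLD'} = 'IGNORE';

    # Option 2: reap explicitly with a non-blocking waitpid loop.
    $SIG{'CHLD'} = sub {
        while ( ( my $pid = waitpid( -1, WNOHANG ) ) > 0 ) {
            # child $pid has been reaped; no zombie is left behind
        }
    };

    my $pid = fork;
    die "fork failed: $!" unless defined $pid;
    if ( $pid == 0 ) {
        exit 0;    # child exits immediately
    }
    sleep 1;       # give the SIGCHLD handler a chance to run
    print "parent done\n";

    Either approach keeps exited children from lingering in the process table; the waitpid loop is the one to use if you need the children's exit statuses.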

    Also running a process in the background (with the &) is not quite the same as daemonizing a process. See Unix Programming FAQ 1.7 : How do I get my program to act like a daemon? for a description of daemonizing.
Re: Fork issues
by pc88mxer (Vicar) on Feb 13, 2008 at 04:39 UTC
    Just a style comment...

    Usually the way accept and fork are used together is to have the parent keep the accept socket open and fork off children to handle incoming requests:

    while (1) {
        my $new_handle = $sock->accept;
        my $pid = fork;
        if ( defined($pid) && $pid == 0 ) {
            # ...child executes here...
        }
    }

    This way there is no need to close the accept socket and re-create it again. In fact, doing so will drop all connections that are queued up on the accept socket. Moreover, this allows the parent to fork off another child even if the first child hasn't completed yet, i.e. you get concurrency.
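    Fleshing that out into a runnable sketch (the ephemeral port, the "ping" payload, and the built-in demo client are my own illustrative additions, not part of the original post): the parent keeps the listening socket open, and each forked child closes its copy of the listener.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use IO::Socket::INET;

    $SIG{'CHLD'} = 'IGNORE';    # let the kernel reap exited children

    my $sock = IO::Socket::INET->new(
        LocalHost => 'localhost',
        LocalPort => 0,          # 0 = let the OS pick a free port (demo only)
        Proto     => 'tcp',
        Listen    => 10,
        Reuse     => 1,
    ) or die "Could not create socket: $!\n";
    my $port = $sock->sockport;

    # Demo client: connect to our own listener from a separate process.
    my $cpid = fork;
    die "fork failed: $!" unless defined $cpid;
    if ( $cpid == 0 ) {
        my $c = IO::Socket::INET->new( PeerAddr => "localhost:$port" )
            or exit 1;
        print $c "ping\n";
        close $c;
        exit 0;
    }

    # The accept-then-fork pattern; a real server would loop here forever.
    my $client = $sock->accept or die "accept failed: $!";
    my $pid = fork;
    die "fork failed: $!" unless defined $pid;
    if ( $pid == 0 ) {           # child handles this connection
        close $sock;             # the child does not need the listener
        my $line = <$client>;
        print "got: $line";
        close $client;
        exit 0;
    }
    close $client;               # parent keeps only the listener open
    sleep 1;                     # demo only: let the handler child finish

    Because the listener is never closed, queued connections survive, and the parent can go straight back to accept() while children are still working.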

Re: Fork issues
by starbolin (Hermit) on Feb 13, 2008 at 00:56 UTC

    Let me start out by saying that I could not duplicate your exact problem. I think, though, that your problem lies in over-simple handling of the return values from fork or new, and in not calling wait() on the children. At first, I could not get your code to fork at all. Then, after adding proper checks, the code ran fine either in a terminal or in the background.

    s//----->\t/;$~="JAPH";s//\r<$~~/;{s|~$~-|-~$~|||s |-$~~|$~~-|||s,<$~~,<~$~,,s,~$~>,$~~>,, $|=1,select$,,$,,$,,1e-1;print;redo}

      Ok, my last post was bad advice. Well, maybe good advice but for the wrong reasons. I ran your original code with just a minimum of changes and it seems to fork OK. Do you have a sample case that actually does something, yet breaks as you describe?

Re: Fork issues
by wazoox (Prior) on Feb 13, 2008 at 14:27 UTC
    Running in the background is quite different from "daemonizing" as stated by the other posters. You can use a simple daemonize function call in your code, though:
    sub daemonize {
        # usage: daemonize( [errorlog path, activitylog path] )
        my $errlog = shift;
        my $actlog = shift;
        $errlog ||= "/dev/null";
        $actlog ||= "/dev/null";
        chdir '/' or die "can't chdir to root : $!";
        open( STDIN,  '</dev/null' ) or die "can't redirect STDIN : $!";
        open( STDERR, ">>$errlog" )  or die "can't redirect STDERR : $!";
        open( STDOUT, ">>$actlog" )  or die "can't redirect STDOUT : $!";
        fork and exit;
        return 1;
    }

Node Type: perlquestion [id://667683]
Approved by ikegami