http://www.perlmonks.org?node_id=1000053


in reply to Re^7: PANIC: underlying join failed threded tcp server
in thread PANIC: underlying join failed threded tcp server

Alrighty, it took a bit longer to recreate this time for whatever reason. I made the changes you suggested, but in the line printf( "ITFREE: thread handle:%x thread-id: %dx\n", thread->handle, GetThreadId( thread->handle ) ); I changed thread->handle to just 'handle', since it looks like that pointer has already been freed at that point.
I've attached the log of this run below in a file called serverOutput2.txt; it exited with a different error this time:
Join failed with 'Inappropriate I/O control operation' : 'The handle is invalid' at rxd.pl line 1128. The RXD server has been shutdown.
Perl exited with active threads:
        151 running and unjoined
        4 finished and unjoined
        0 running and detached
That again appears to be related to the thread handle.
Sorry about the complicated protocol. The people I work with didn't want much back-and-forth communication between the client and server; they wanted to send a command once and receive a response, and we still needed a way to transfer a 400MB file. So this is what we (I) came up with (you should have seen the earlier version). I've attached the client (rx.pl) as well as the test script I used to recreate this issue. (I also included the server, rxd.pl, with all commands other than EXEC stripped out to shorten the code, and the exact threads.xs used to compile the threads module.)
Thanks again for the help. Files:
https://dl.dropbox.com/u/19686501/perlmonk.zip

Re^9: PANIC: underlying join failed threded tcp server
by BrowserUk (Patriarch) on Oct 20, 2012 at 05:37 UTC

    Okay. I think I have a handle on what is happening. The short explanation is that you are simply running the OS out of resources.

    2000 concurrent threads, each starting a console session running a dir -- 100 of which are recursive from the root -- consume a prodigious amount of resources.

    Your use of a VM may be a contributory factor; I cannot reproduce the error here. My system grinds to a near-complete halt for an extended period, but once the 1900 dirs of the current directory finish and the 1900 cmd.exe's, 1900 threads and 1900 tcp connections go away, my system returns to a responsive state. It is then just a case of waiting while the 100 dir /s c:\ commands finish recursing the 212,000 directories and 1.5 million files on my hard drive, and then for all that data to get wrapped up in your protocol, shipped back to the receiving processes, unwrapped and output to the terminal.

    But it works. It eventually completes okay, which I find quite remarkable and makes me think ithreads -- on Windows at least -- is in remarkably good fettle.

    I do not believe that the problem you are seeing is a Perl issue; but rather an OS issue where -- under extreme resource depletion -- it is dropping/forgetting kernel thread objects that have completed before perl gets the chance to wait for them. I don't believe that should happen under normal circumstances, but these are not normal.

    Why "believe" this and "believe" that!

    There is a possibility that the trace output we produced is lying to me. For simplicity, the trace I had you add to threads.xs is crude -- and flawed. Using printf from multiple threads in C is subject to the same problems of buffering and overlapping as print/printf from multiple threads in Perl; it needs to be serialised. It is possible that what I am seeing in the trace output you supplied -- i.e. a thread created that has become a non-thread by the time Perl tries to join it:

    9408: ITCREATE: thread handle:2ca0 thread-id: 3620x
    ...
    thread handle:2ca0 thread-id: 0x GetLastError output: '6'

    -- is a symptom of overwritten buffered IO rather than an OS "quirk". To counter that possibility, I've re-written the tracing code and wrapped it in a critsec to (attempt to) preclude it. What that means is that I am going to ask you to replace your threads.xs with this version:

    I was going to post it above, but it is too big; perlmonks won't accept it. You'll need to /msg me an email ID so I can send it to you.

    And re-build/install it. Then re-create your problem one more time. If the new install goes well, you should see:

    *** CritSec initialised ***
    RXD+ had been started on port 1600
    ...

    when you start rxd.pl.

    If you are successful in re-creating the failure, the trace output should be more reliable.
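
    (As an aside, the interleaving problem exists with plain print/printf from multiple Perl threads too, and the fix is the same idea as the critsec: serialise the output. This is only an illustrative sketch of that idea in pure Perl -- it is not part of the threads.xs patch -- using a shared variable as a lock so that lines from different threads cannot overlap:)

    use strict;
    use warnings;
    use threads;
    use threads::shared;

    my $lock :shared;    # shared variable used purely as a mutex for output

    $| = 1;              # flush after every print so buffered output cannot interleave

    sub trace {
        my $msg = shift;
        lock( $lock );   # only one thread may print at a time
        printf "%05d: %s\n", threads->tid(), $msg;
    }

    my @workers = map {
        threads->create( sub {
            trace( "worker starting" );
            trace( "worker done" );
        } );
    } 1 .. 10;

    $_->join for @workers;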

    I also suggest the following 1-line change to rxd.pl which, whilst it won't cure the problem, should make it less likely to occur -- assuming I've diagnosed the problem correctly. The change is to severely reduce the stack size allocated to each thread:

    use warnings;
    use threads stack_size => 4096;
    use threads::shared;
    use IO::Handle;
    use IO::Socket::INET;
    use File::Find;
    use File::Path;
    use Digest::SHA;
    ...

    Note: Anecdotal evidence suggests that this does not work under (some versions of) *nix, if that is one of your targets. (It might work with 64k rather than 4k, but that is a guess! I've never had any feedback to confirm or deny that.)
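
    (If changing the global import is awkward, perldoc threads also documents supplying the stack size per thread at create time. A sketch only; 'handleClient' here is a stand-in name for whatever your per-client handler sub is actually called:)

    # Apply the small stack only to the per-client threads.
    my $thr = threads->create(
        { 'stack_size' => 4096 },   # small stack for this thread only
        \&handleClient,             # stand-in for your real handler sub
        $client,
    );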

    A long term fix

    Finally, I think that the real fix for the problem -- assuming we can confirm my diagnosis -- is to limit the number of concurrent clients to some sane number. On windows, with the stack_size fix above, a moderately specified VM -- say 8GB memory -- should handle 100 concurrent clients okay. You'll need to tweak that number for your target environment.

    How I would implement that limiting is shown in the following code:

    ...
    unless( $client = $lsn->accept ) {
        tprint( "Could not connect to socket: " . $! );
        next;
    }
    if( threads->list( threads::running ) >= 100 ) {
        $client->shutdown( 2 );
        $client->close;
        tprint( "Client $client rejected; too many concurrent clients." );
        next;
    }
    ...

    You might want to defer the rejection until you've accepted and validated the transmitted command and return a rejection/retry notification at that point if there are still too many concurrent clients.
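
    (A sketch of what that deferred rejection might look like. readCommand(), isValid(), sendResponse() and handleClient() are stand-ins for whatever your protocol code actually calls; only the shape of the logic is the point:)

    # Accept the connection and read/validate the command first,
    # then decide whether to service it or ask the client to retry.
    my $cmd = readCommand( $client );                    # hypothetical protocol read
    unless( isValid( $cmd ) ) {                          # hypothetical validation
        sendResponse( $client, 'ERR bad command' );
        $client->close;
        next;
    }
    if( threads->list( threads::running ) >= 100 ) {
        sendResponse( $client, 'BUSY retry later' );     # rejection/retry notification
        $client->close;
        next;
    }
    threads->create( \&handleClient, $client, $cmd );    # otherwise service it as before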

    {Thwack!} Ball's in your court :)

    Update: BTW, I also reduced testrdx.pl to this:

    That both reduced system resource usage on my single-machine tests (by doing away with the threads waiting on clients) and controlled the number of concurrent clients.


    With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.

    RIP Neil Armstrong

      Alrighty, I reran the tests with the new threads.xs file you sent me: the first time with the default thread stack size, and the second time with the 4k value you suggested. The second run did take considerably longer to die but, as you said, the change only makes the failure less likely to occur (which seems to be the case). https://dl.dropbox.com/u/19686501/perlmonk/logs.zip

      I like your solution for limiting the number of connections. I had to make a slight tweak so that finished threads would still be joined (see the sketch below), but it seems to be working well...ish. The current VM I'm testing on has only 4GB of RAM, and running my usual tests the thing would still occasionally crash with the same message even after reducing the limit to 50 threads. I tried 30, and it seems to be going ok. I was just wondering whether there was any logic behind your suggestion of 100 threads for 8GB of memory, or if that was just a rough estimate?
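
      (For reference, the tweak amounts to something like the following at the top of the accept loop -- a reconstruction of the idea rather than the exact code, with 30 as the limit I settled on:)

      # Reap any threads that have already finished, so the running-thread
      # count only reflects genuinely live clients.
      $_->join for threads->list( threads::joinable );

      if( threads->list( threads::running ) >= 30 ) {
          $client->shutdown( 2 );
          $client->close;
          tprint( "Client $client rejected; too many concurrent clients." );
          next;
      }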

      Thanks again for all your help, I'll post again if I run into the same problem

        I was just wondering whether there was any logic behind your suggestion of 100 threads for 8GB of memory, or if that was just a rough estimate?

        A simple guesstimate, based upon my observation that on my system each client requesting a dir /s c:\ required around 50-60 MB in order to accumulate all the output, wrap it up and forward it to the client: 100 * 60MB ~= 6GB, leaving some headroom for other stuff. Also, remember that there is a fixed overhead for the OS, so 100 on 8GB might well translate to < 50 on 4GB.

        I think that the real resource problem with your server/protocol is the need to accumulate all the output at the server prior to returning it to the client, forced on you in part by your use of backticks to execute the command.

        If you used a piped-open and returned the output to the client line by line as you get it:

        # $resp = `$rxdArgs 2>>&1`;
        my $pid = open my $PIPE, '-|', qq[ $rxdArgs 2>>&1 ]
            or die $!;

        while( <$PIPE> ) {
            returnOutputToClient( $_ );
        }

        then your server memory usage would be cut to 1/10th of its current requirements, with (hopefully) pro-rata benefits to the number of concurrent clients you could handle. But I realise that would require a substantial re-working of both your server processing and the communications protocol.

        The upside of the change would be that your server's concurrent-client limits would be independent of the commands the clients are running (and the volumes of output they produce), as you would only cache a single line at the server. It would also allow your clients to start seeing the output from their interactions much closer to real time, and potentially even to interrupt that output once they've seen enough.

        Also, transmitting the retrieved output line by line would have far less impact upon the network infrastructure than returning it in one huge chunk.
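
        (For illustration only, here is one way returnOutputToClient() might be written under a hypothetical length-prefixed framing -- your real protocol will differ; the point is just that only a single line is ever held at the server:)

        # Hypothetical framing: a 4-byte big-endian length prefix, then the line.
        # Assumes the client socket is passed in alongside the line.
        sub returnOutputToClient {
            my( $client, $line ) = @_;
            print {$client} pack( 'N', length $line ), $line;
        }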

        I also wonder whether you have the option of trying your tests on a real machine rather than a VM? I suspect that if you did, you would see far fewer of these kinds of "mysterious OS problems". That is based on my own observations of weirdnesses with code running in VMs.

        You might also consider upgrading the OS. WS-2003 predates most of the rise and rise of VMs, and I'm sure that the use of VMs has highlighted (and hopefully caused to be fixed) many dubious practices in the earlier kernels. WS-2010 might be more stable in that environment.

        In a similar vein, I found far fewer problems running VMs under Vista than I did under XP. And more modern processors with the various levels of VT-x/AMD-V extensions are less prone to such mysteries than older ones.

        Thanks again for all your help, I'll post again if I run into the same problem

        You're welcome and good luck. (And it is always nice to get feedback:)


        With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
        Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
        "Science is about questioning the status quo. Questioning authority".
        In the absence of evidence, opinion is indistinguishable from prejudice.

        RIP Neil Armstrong