Testing methodology

by BrowserUk (Pope)
on Mar 04, 2012 at 11:53 UTC ( #957767=perlmeditation )

I frequently suggest Thread::Queue for pooled thread applications, but in addition to some non-queue-like behavioural cruft, that module has no way of auto-limiting the size of the queue. That means it is all too easy to populate the queue at a rate far in excess of the pool's ability to process those entries. And that can lead to excessive memory consumption.

Here is an implementation of a shared queue to address that deficiency:

#! perl -slw
use strict;

package Q;

use threads;
use threads::shared;

use constant {
    NEXT_WRITE => -2,
    N          => -1,
};

sub new {
    # warn "new: @_\n";
    my( $class, $Qsize ) = @_;
    $Qsize //= 3;
    my @Q :shared;
    $#Q = $Qsize;
    @Q[ NEXT_WRITE, N ] = ( 0, 0 );    ## nextWrite, N
    # warn sprintf "new: size %d\n\n", scalar @Q;
    return bless \@Q, $class;
}

sub nq {
    # warn "nq: @_\n";
    my $self = shift;
    lock @$self;
    for( @_ ) {
        cond_wait @$self until $self->[ N ] < ( @$self - 2 );
        $self->[ $self->[ NEXT_WRITE ]++ ] = $_;
        ++$self->[ N ];
        $self->[ NEXT_WRITE ] %= ( @$self - 2 );
        cond_signal @$self;
    }
}

sub dq {
    # warn "dq: @_\n";
    my $self = shift;
    lock @$self;
    cond_wait @$self until $self->[ N ] > 0;
    my $p = $self->[ NEXT_WRITE ] - $self->[ N ]--;
    $p += @$self - 2 if $p < 0;
    my $out = $self->[ $p ];
    cond_signal @$self;
    return $out;
}

sub n {
    my $self = shift;
    # lock @$self;
    return $self->[ N ];
}

sub _state {
    no warnings;
    my $self = shift;
    lock @$self;
    return join '|', @{ $self };
}

return 1 if caller;
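The index arithmetic above is easy to get wrong, so here is a minimal single-threaded model of the same ring buffer (the helper names nq_1/dq_1 are hypothetical, and the locking and cond_wait/cond_signal of the real module are deliberately elided) showing how the read position is derived from NEXT_WRITE and N:

```perl
use strict;
use warnings;

# Single-threaded model: $size data slots, then NEXT_WRITE and N
# appended as the last two elements, mirroring Q's layout.
my $size = 3;
my @q = ( (undef) x $size, 0, 0 );    # data, NEXT_WRITE, N

sub nq_1 {
    my $item = shift;
    die "full" unless $q[-1] < $size; # the real nq would cond_wait here
    $q[ $q[-2]++ ] = $item;
    ++$q[-1];
    $q[-2] %= $size;                  # wrap the write cursor
}

sub dq_1 {
    die "empty" unless $q[-1] > 0;    # the real dq would cond_wait here
    my $p = $q[-2] - $q[-1]--;        # read position trails the writer by N
    $p += $size if $p < 0;            # wrap backwards
    return $q[$p];
}

nq_1($_) for 'a' .. 'c';              # fills all three slots, cursor wraps to 0
print dq_1(), "\n" for 1 .. 3;        # prints a, b, c: FIFO order survives the wrap
```

The point of the sketch is only the wrap-around: the write cursor advances modulo the data size, and the read position trails it by N, wrapping backwards when it goes negative.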

Criticisms and comments on the implementation are welcome, but what I'd really like is for people to post what tests they would implement for this module, and how they would implement them.

It's a big ask I know, and there is a not-so-hidden agenda. Anyone prepared to step up?


With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.

The start of some sanity?

Re: Testing methodology
by tobyink (Abbot) on Mar 04, 2012 at 17:34 UTC

    You don't provide any documentation for the package, but a rigorous test suite would ideally start by testing that the API for the package does actually exist. i.e. the constructor constructs an object, blessed into the right package; and each public method is callable.

    Then you test the basic functionality of the package. In this case, it's a queue, so FIFO. So you'd put some stuff into it, and test that it comes out in the correct order.

    A feature of the package is that it has a limited size, so you'd want to test that this limit is enforced - i.e. try enqueuing more items than the size limit, and checking that it blocks. I haven't done any threaded programming in Perl for years, but in my limited experience I'm not sure it's possible to do that test reliably as it may introduce race conditions. Keeping the items enqueued simple (e.g. integers), and sleeping for a second before testing that the queue is blocked seems to be sufficient protection against race conditions.

    Here's a test script using no non-core modules (apart from Q.pm of course!)

    use threads;    # needed for threads->create below
    use Test::More tests => 21;

    # Test that Q.pm actually compiles.
    BEGIN { use_ok 'Q' };

    # Test that the API as documented still exists.
    my $q = new_ok Q => [5];
    can_ok $q => 'nq';
    can_ok $q => 'dq';
    can_ok $q => 'n';

    # Thread to add some numbers to the queue.
    my $enqueue = threads->create(sub {
        $q->nq($_) for 90..99;
    });

    # Vulnerable to race conditions :-(
    sleep 1;
    ok !$enqueue->is_joinable, '$q->nq is waiting';

    # This breaks encapsulation by peeking at the internals of $q.
    # But we want to figure out if $q is waiting at '95'.
    ok !(grep { $_==95 } @$q), '95 is not on $q yet';

    # Numbers come out of the thread in the correct order:
    is $q->dq, $_, "got $_ from queue" for 90..99;

    # Queue should now be empty, so not waiting for anything.
    sleep 1;
    ok $enqueue->is_joinable, '$q->nq is no longer waiting';
    $enqueue->join;

    # Test that "dq" blocks too.
    my $dequeue = threads->create(sub {
        # Add up the numbers we get from the queue.
        my $sum;
        $sum += $q->dq for 1..10;
        return $sum;
    });

    # We've not added any numbers to the queue yet, so the queue
    # should be waiting.
    sleep 1;
    ok !$dequeue->is_joinable, '$q->dq is waiting';

    # Push some numbers into the queue. These sum to 55.
    $q->nq($_) for 1..10;

    # Queue should have received all the numbers.
    sleep 1;
    ok $dequeue->is_joinable, '$q->dq is no longer waiting';

    my $sum = $dequeue->join;
    is $sum, 55, 'result of calculation performed in $dequeue is correct';

    Update: comments on the implementation are welcome...

    It would be handy to have a few additional methods:

    • length - the current number of items in the queue.
    • max_length - the maximum number of items allowed in the queue.
    • is_full - sub is_full { $_[0]->length == $_[0]->max_length }
    • peek - return the item at the head of the queue, but without dequeueing.
    • peek_all - return the entire queue as a list, without dequeueing.

    Many of the above are trivial to implement by inspecting @$q, however implementing them outside the package itself breaks encapsulation. If people using your module start relying on the internal details of how Q.pm works (that it uses an arrayref, and keeps its stats in the last two array elements, etc), this leaves you less freedom to refactor Q.pm in the future if you discover a more efficient way of doing it.
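    To make the point concrete, here is a sketch of how such accessors might live inside the package, using a deliberately simplified stand-in class (MiniQ and all of its method names are hypothetical, this is single-threaded, and the real Q would need locking around each of these):

```perl
use strict;
use warnings;

# Hypothetical, deliberately simplified stand-in (no threads, no locking):
# the point is only that the accessors live *inside* the package.
package MiniQ;

sub new {
    my ( $class, $max ) = @_;
    return bless { items => [], max => $max // 3 }, $class;
}

sub nq         { my $self = shift; push @{ $self->{items} }, @_ }  # real nq would block when full
sub dq         { shift @{ $_[0]{items} } }
sub length     { scalar @{ $_[0]{items} } }
sub max_length { $_[0]{max} }
sub is_full    { $_[0]->length >= $_[0]->max_length }
sub peek       { $_[0]{items}[0] }       # head item, not removed
sub peek_all   { @{ $_[0]{items} } }

package main;

my $q = MiniQ->new(2);
$q->nq('a');
print $q->peek,   "\n";                        # a  (still enqueued)
print $q->length, "\n";                        # 1
print $q->is_full ? "full\n" : "not full\n";   # not full
```

    Callers then never need to know whether the internals are an arrayref, a hashref, or something else entirely.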

    Not only would the above make the module more testable, they'd also make it more useful. As I said, I don't know an awful lot about Perl threading, but I know a bit about parsing, which also tends to operate on a FIFO queue. Parsers for pretty much any non-trivial language have a peek_token method or two hidden away somewhere, for using tokens further up the stream to disambiguate the current token.

      Replying to the update only at this time:

      It would be handy to have ...
      • length() -- The module already has method n().

      • max_length() -- You supplied this information to me at creation time.

        It never changes. If you need it, remember it.

      • is_full() -- Show me a use-case?

        One that doesn't involve you polling this method to decide when to push a new value.

        That polling will require the queue to be locked while the value is calculated, and unlocked prior to returning the value to you. That polling will slow down every other producer and consumer. And the value returned will be out of date by the time you get it.

        Therefore there could be no guarantee that if it returns not-full, and you immediately nq(), that nq() won't block. The information is therefore useless to you.

        Conversely, if you just go ahead and nq(), and it needs to block, it will, and will consume no cpu until the OS wakes it when room is available. (Via cond_signal).

        I doubt you will ever find a realistic use case that will persuade me to add this.

      • peek() -- Again, you could try to convince me with a use-case, but you are unlikely to succeed.

        By the time peek() returned the next value to you, some other thread may have dq()'d it. Then what?

      • peek_all -- There is no possible use-case for this.

        There already is the private method: _state() which effectively does this. It returns the entire internal structure as a string.

        Its intended use is a debugging aid. Indeed, I added it to allow me to track down a timing issue. But even then, for it to be useful, I had to serialise all state transitions, to make it a usable diagnostic. And doing that, by necessity, slowed the throughput to a crawl.

      Not only would the above make the module more testable, they'd also make it more useful.

      Sorry, but I disagree completely with both halves of that statement.

      A queue has one purpose in life. Take things in at one end and let them out at the other as efficiently as possible.

      Compromising the function to make testing easier is not going to happen. Adding could-dos without use-cases, for their own sake, is not going to happen.



        Re max_length. Let's say I'm writing a module Amusement::Park which has a list of Amusement::Rides. Each Amusement::Ride has an associated Q. (This is a popular amusement park.) I'm writing the Amusement::Park but have no control over the Amusement::Rides, so don't initialise the Q objects. Good amusement park management says I need to point customers at rides with the least busy queues. To do this I need to calculate length / max_length for each queue.

        A use case for is_full: say I generate events of multiple levels of importance, such as errors, warnings and debugging messages. If the queue is full, I might choose to silently drop debugging messages rather than being blocked. Or there might be several queues I could potentially add an item to, and I wish to choose one that is not full.
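        That drop-when-full policy could be sketched with a non-blocking enqueue; everything here (BoundedQ, try_nq) is hypothetical and single-threaded, purely to illustrate the use case:

```perl
use strict;
use warnings;

# Hypothetical sketch: try_nq refuses (returns false) instead of blocking
# when the queue is full, so the caller can decide what to drop.
package BoundedQ;

sub new { bless { items => [], max => $_[1] // 3 }, $_[0] }

sub try_nq {
    my ( $self, $item ) = @_;
    return 0 if @{ $self->{items} } >= $self->{max};   # full: refuse, don't block
    push @{ $self->{items} }, $item;
    return 1;
}

sub dq { shift @{ $_[0]{items} } }

package main;

my $log = BoundedQ->new(2);
for my $ev ( [ error => 'disk failed' ], [ debug => 'tick' ], [ debug => 'tock' ] ) {
    my ( $level, $text ) = @$ev;
    unless ( $log->try_nq($text) ) {
        die "lost an error!" if $level eq 'error';   # errors must never be dropped
        warn "dropped debug: $text\n";               # low-priority: drop when full
    }
}
```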

        peek/peek_all - there may be several producers but only one consumer, so no danger of peeking a value but somebody else dequeueing it.

      Firstly, as I already said: thank you for stepping up. I note that the 'big guns' have ducked & covered, presumably keeping their powder dry.

      I've spent the best part of yesterday responding to your test suite section by section, and I just discarded most of it because it would be seen as picking on you, rather than targeting the tools I am so critical of.

      1. # Test that Q.pm actually compiles.
        BEGIN { use_ok 'Q' };

        What is the purpose of this test?

        • What happens if the test is included and the Q module is not loadable?

          The Test::* modules trap the fatal error from Perl, and so the test suite continues to run, failing every test.

          Not useful.

        • What happens if we do a simple use Module; instead?

          We get two lines of output instead of 6. The lines aren't preceded by comment cards, so my editor does not ignore them. The test run stops immediately rather than running on, testing stuff that cannot possibly pass; or is in error if it does.

          Useful.

        • What is actually being tested here?

          That perl can load a module? If it couldn't the Test::* tools wouldn't load.

          Not useful.

          That the tarball unpacked correctly? Perl would tell me that just as reliably.

          No more useful than Perl.

          That the module installed correctly? No. Because when the test suite is run, the module isn't installed. It is still in the blib structure.

          Not useful.

        • And why does it force me to put it in a BEGIN{}? Because without it, I'd have to use parens on all my method calls otherwise they may be taken as filehandles or barewords.

          Worse than non-useful. Detrimental. Extra work because of changed behaviour.

      2. # new_ok
        my $q = new_ok Q => [5];

        This apparently tests whether the return object is of the same class as the name supplied for the class. Why?

        That prevents me from doing:

        package Thing;

        use if $^O eq 'MSWin32', 'Thing::Win32';
        use if $^O ne 'MSWin32', 'Thing::nix';

        sub new {
            $^O eq 'MSWin32' ? Thing::Win32->new() : Thing::nix->new();
        }

        Detrimental. Extra work; limits options.

      3. # Test that the API as documented still exists.
        can_ok $q => 'nq'; can_ok $q => 'dq'; can_ok $q => 'n';
        • What do we get if we use this and it fails?

          not ok 1 - async::Q->can('pq')
          #   Failed test 'async::Q->can('pq')'
          #   at -e line 1.
          #   async::Q->can('pq') failed

          Four lines, three of which just repeat the same thing in different words. And the tests continue even though any that use that method will fail.

          No benefit. Verbose. Repetitive.

        • And if we let Perl detect it?
          Can't locate object method "pq" via package "async::Q" at -e line 1.

          One line, no comment card. No repetition.

        Pointless extra work for no benefit.

      4. The rest elided.

      Again, thank you for being a willing subject. Now's your chance for revenge :) Take it!

      Here is my module complete with its test suite:

      #! perl -slw
      use strict;

      package async::Q;

      use async::Util;
      use threads;
      use threads::shared;

      use constant {
          NEXT_WRITE => -2,
          N          => -1,
      };

      sub new {
          # twarn "new: @_\n";
          my( $class, $Qsize ) = @_;
          $Qsize //= 3;
          my @Q :shared;
          $#Q = $Qsize;
          @Q[ NEXT_WRITE, N ] = ( 0, 0 );
          return bless \@Q, $class;
      }

      sub nq {
          # twarn "nq: @_\n";
          my $self = shift;
          lock @$self;
          for( @_ ) {
              cond_wait @$self until $self->[ N ] < ( @$self - 2 );
              $self->[ $self->[ NEXT_WRITE ]++ ] = $_;
              ++$self->[ N ];
              $self->[ NEXT_WRITE ] %= ( @$self - 2 );
              cond_signal @$self;
          }
      }

      sub dq {
          # twarn "dq: @_\n";
          my $self = shift;
          lock @$self;
          cond_wait @$self until $self->[ N ] > 0;
          my $p = $self->[ NEXT_WRITE ] - $self->[ N ]--;
          $p += @$self - 2 if $p < 0;
          my $out = $self->[ $p ];
          cond_signal @$self;
          return $out;
      }

      sub n {
          # twarn "n: @_\n";
          my $self = shift;
          lock @$self;
          return $self->[ N ];
      }

      sub _state {
          # twarn "_state: @_\n";
          no warnings;
          my $self = shift;
          lock @$self;
          return join '|', @{ $self };
      }

      return 1 if caller;

      package main;

      use strict;
      use warnings;
      use threads ( stack_size => 4096 );
      use threads::shared;
      use async::Util;
      use Time::HiRes qw[ time sleep ];

      our $SIZE //= 10;
      our $N    //= 1e5;
      our $T    //= 4;
      ++$T; $T &= ~1;

      my $Q1_n = new async::Q( $SIZE );
      my $Qn_n = new async::Q( $SIZE );
      my $Qn_1 = new async::Q( $SIZE );

      my @t1 = map async( sub{ $Qn_n->nq( $_ ) while defined( $_ = $Q1_n->dq ); } ), 1 .. $T/2;
      my @t2 = map async( sub{ $Qn_1->nq( $_ ) while defined( $_ = $Qn_n->dq ); } ), 1 .. $T/2;

      my $bits :shared = chr(0);
      $bits x= $N / 8 + 1;

      my $t = async{
          while( defined( $_ = $Qn_1->dq ) ) {
              die "value duplicated" if vec( $bits, $_, 1 );
              vec( $bits, $_, 1 ) = 1;
          }
      };

      my $start = time;

      $Q1_n->nq( $_ ) for 1 .. $N;
      $Q1_n->nq( (undef) x ($T/2) );
      $_->join for @t1;

      $Qn_n->nq( (undef) x ($T/2) );
      $_->join for @t2;

      $Qn_1->nq( undef );
      $_->join for $t;

      my $stop = time;

      my $b = unpack '%32b*', $bits;
      die "NOK: $b : \n" . $Q1_n->_state, $/, $Qn_n->_state, $/, $Qn_1->_state
          unless $b == $N;

      printf "$N items by $T threads via three Qs size $SIZE in %.6f seconds\n",
          $stop - $start;

      __END__
      C:\test>perl async\Q.pm -N=1e4 -T=2 -SIZE=40
      1e4 items by 2 threads via three Qs size 40 in 5.768000 seconds

      C:\test>perl async\Q.pm -N=1e4 -T=20 -SIZE=40
      1e4 items by 20 threads via three Qs size 40 in 7.550000 seconds

      C:\test>perl async\Q.pm -N=1e4 -T=200 -SIZE=400
      1e4 items by 200 threads via three Qs size 400 in 8.310000 seconds

      You'll notice that in addition to performing a default test, it can be configured through command line parameters to vary the key parameters of the test.

      The actual test consists of setting up 3 queues. One thread feeding data via the first queue to a pool of threads (1 to many). That pool dequeues the input and passes on to a second pool of threads via the second queue (many to many). And finally those threads pass the data back to the main thread via the third queue (many to 1).

      The data for a run consists of a simple list of integers. Once they make it back to the main thread, they are checked off against a bitmap tally to ensure that nothing is dequeued twice, nor omitted.
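      The tally itself needs no threads to demonstrate; this sketch reproduces just the vec()/unpack '%32b*' bookkeeping from the script above:

```perl
use strict;
use warnings;

# One bit per expected item; vec() sets and tests individual bits,
# unpack '%32b*' gives the population count of the whole bit string.
my $N    = 1000;
my $bits = chr(0) x ( $N / 8 + 1 );

for my $item ( 1 .. $N ) {    # stand-in for the values dequeued from the final queue
    die "value $item duplicated" if vec( $bits, $item, 1 );
    vec( $bits, $item, 1 ) = 1;
}

my $seen = unpack '%32b*', $bits;    # count of set bits
die "tally mismatch: $seen" unless $seen == $N;
print "$seen items, no duplicates, none missing\n";
```

      If any item were dequeued twice the vec() test would catch it immediately, and if any were lost the final bit count would fall short of $N.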

      All in one file; no extraneous modules; no extraneous output; completely compatible with any other test tools available, because it is nothing more than a simple perl script.

      Feel free to rip it to shreds.



        Yeah, I eventually realized that there was almost nothing in Test::More that I was actually using or even found to be a wise thing to use. isa_ok()? I just can't imagine that finding a real mistake. I can certainly see it complaining about an implementation detail that I might change.

        I also don't want to use is_deeply(). I think a test suite should complain about what it cares about. If there is extra stuff that it doesn't care about, then it shouldn't complain.

        And I find Test::Simple to just be a stupid idea.

        But I do use and value Test (though I use my own wrapper around it for a few minor reasons and to provide decent Lives() and Dies() implementations -- better than the several modules that purport to provide such that I've seen). I certainly make frequent use of skip()ing.

        The fact that Test doesn't use Test::Builder is a minor side benefit that becomes a major benefit every so often when I feel the need to look at the source code. Test::More's skip() is so bizarrely defined that I can't actually use it correctly without reading the code that implements it and the act of trying to find said code is so terribly aggravating since Test::Builder is involved, so I'm happy to have realized that I never have to go through that again.

        There are tons of tools built on top of TAP (and other testing schemes such as used by some of our Ruby-based tests). It is actually useful in the larger context for each individual test to get numbered so we can often correlate different failure scenarios and to make concise reports easy.

        And we have more than one test file per code file in many cases. This is especially useful when there are interesting set-up steps required for some tests. Testing leaf modules is the easiest case and usually doesn't really stress one's testing chops.

        Many of my test files abstract a few patterns of test and then run lots of simple tests that are specified with a small amount of data. So, for example, I might have a few dozen lines where each line specifies an expected return value, a method name, and an argument list (and maybe a test description).

        Also, having the test code in the same file as the code being tested would complicate coverage measurement, easily distinguishing commits that are fixing code from commits that are fixing tests, searching for real uses of a specific feature while ignoring tests that make use of it, ...

        But, no, I'm not interested in "stepping up" to your challenge. Many of my reasons would just come across as a personal attack so I'll not go into them. But most of what I'm talking about I can't demonstrate well by pasting a bit of code. I have no interest in trying such a feat.

        - tye        

        What happens if the test is included and the Q module is not loadable?

        Depends why loading fails. If Q.pm parses OK and executes OK but returns false, then the use_ok test will fail but the other twenty tests will still pass.

        None of your tests cover the situation where Q.pm returns false because you never attempt to "use Q" or "require Q".

        Nothing compels you to put a BEGIN { ... } block around it, but as a matter of style (in both test suites and regular code) I tend to make sure all modules load at compile time unless I'm purposefully deferring the load for a specific reason.

        This apparently tests whether the return object is of the same class as the name supplied for the class. Why?

        No, it doesn't. It checks that the returned object is of the same class as the name supplied, or a descendent class in an inheritance hierarchy. This still allows you to return Q::Win32 or Q::Nix objects depending on the current OS, provided that they both inherit from Q.

        To have a class method called "new" in Q, which returns something other than an object that "isa" Q would be bizarre and likely to confuse users. Bizarreness is worth testing against. Tests don't just have to catch bugs - they can catch bad ideas.

        But it can catch bugs anyway. Imagine Q/Win32.pm contains:

        my @ISA = ('Q');

        Ooops! That should be our @ISA. This test catches that bug.
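        The my/our distinction is cheap to demonstrate: a lexical @ISA is invisible to method resolution, so can() (and hence an isa-style test) catches the mistake:

```perl
use strict;
use warnings;

package Base;
sub new   { bless {}, shift }
sub greet { "hello" }

package Broken;
my @ISA = ('Base');     # lexical: method resolution never sees it

package Fixed;
our @ISA = ('Base');    # package variable: this is what inheritance uses

package main;

print Fixed->can('greet')  ? "Fixed inherits\n"  : "Fixed broken\n";
print Broken->can('greet') ? "Broken inherits\n" : "Broken does not\n";
```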

        can_ok

        Notice none of my further tests touch the "n" method? Well, at least its existence is tested for. If for some reason during a refactor it got renamed, this test would fail, and remind me to update the documentation.

        I don't think any of your tests check the "n" method either. If you accidentally removed it during a refactor, end users might get a nasty surprise.

        A can_ok test is essentially a pledge that you're not going to remove a method, or not without some deliberate decision process.

        Use of a formalised testing framework can act as a contract - not necessarily in the legal sense - between the developer and the end users. It's a statement of intention: this is how my software is expected to work; if you're relying on behaviour that's not tested here, then you're on dangerous ground; if I deviate from this behaviour in future versions, it will only have been after careful consideration, and hopefully documentation of the change.

        ☆ ☆ ☆

        Overall most of your complaints around Test::More seem to revolve around three core concerns:

        1. Verbosity of output;
        2. That it continues after a failure has been detected rather than bailing out; and
        3. That it apparently forces you to "jump through hoops".

        Verbosity of output has never been an issue for me. The "prove" command (bundled with Perl since 5.8.x) gives you control over the granularity of result reporting: one line per test, one line per file, or just a summary for the whole test suite.

        Yes, you get more lines when a test fails, but as a general rule most of your tests should not be failing, and when they do, you typically want to be made aware of it as loudly as possible.

        The fact that test running continues after a failure I regard as a useful feature. Some test files are computationally expensive to run. If lots of calculations occur and then a minor test of limited importance fails, I still want to be able to see the results of the tests following it, so if there are any more failures I can fix them all before re-running the expensive test file.

        If a particular test is so vital that you think the test file should bail out when it fails, it's not especially difficult to append "or BAIL_OUT($reason)" to the test:

        my $q = new_ok Q => [5] or BAIL_OUT("too awful");

        Test::Most offers the facility to make all tests bail out on failure, but I've never really used Test::Most.

        One man's "forced to jump through hoops" is another man's "saved from writing repetitive code".

        new_ok saves me from writing:

        my $q = Q->new(5);
        unless (blessed $q and $q->isa('Q')) {
            warn "new did not return an object which isa Q";
            # and note that the line number reported by "warn" here
            # is actually two lines *after* the real error occurred.
        }

        Ultimately if I did ever feel like a particular set of tests wasn't a natural fit for Test::More, there would be nothing to stop me sticking a few non-TAP scripts into my distro's "t" directory, provided I didn't name them with a ".t" at the end. They can live in the same directory structure as my other tests; they just won't get run by "prove" or "make test", and won't be reported on by CPAN testers. It doesn't have to be an either/or situation.

Re: Testing methodology
by Anonymous Monk on Mar 04, 2012 at 18:27 UTC
    If you want to test queue-size limits under plausible conditions set up more-than-two consumers and more-than-two producers with random rates of speed from cycle to cycle but some known to be faster/slower than the others. Provide some way for the process to know that it was or was not blocked during its last request (and test that code too). Let one thread launch this test case, wait for them to finish and then judge the scores (how much sent/received, how many times blocked due to empty or full) by some heuristic that is not absolute. Fast producers and fast consumers should have run against limits; slow consumers in the presence of (only) fast producers probably not. No outcome absolutely deterministic, but you can deduce whether the outcomes are plausible or not. Vary the above description according to exactly what your package is or is not designed to be capable of.
      ... set up more-than-two consumers and more-than-two producers with random rates of speed from cycle to cycle but some known to be faster/slower than the others.

      Thanks for the response anonymonk.

      With regard to trying to orchestrate indeterminacy. I've tried in the past and it is a sucker's game.

      The one sure-fire thing you learn about concurrency when you've done enough of it, is that you do not have to orchestrate deadlocks, live-locks, race conditions, or any of the other nasties. Run a bad system for a while and make sure plenty of other different things are happening in the same system, and the nasties will make themselves known.

      Hence, my surety against these anomalies is to run my test suite (posted elsewhere) with big numbers and then play music, watch the iPlayer, and defrag my hard disks concurrently. It is a fair bet that a more diverse set of inter-thread timings occurred during that than I could ever hope to orchestrate deliberately. If the test suite completes correctly with all that going on, it is probably bomb proof.

      A typical test run consists of this:

      C:\test>perl async\Q.pm -N=1e6 -T=400 -SIZE=400
      1e6 items by 400 threads via three Qs size 400 in 811.944000 seconds

      That's 1 million items fed from 1 thread via a queue to a pool of 200 threads; those threads feed it via a second queue to another pool of 200 threads; which in turn feed it via third queue back to the main thread. At the same time I'm listening to Division Bell, whilst "Racing for Time" (movie) plays away to itself (with the volume off) in a tab in my browser. All of which simply means that my 4-cores are averaging over 90% usage each and I don't need any heat in the room despite being close to zero outside because the cpu fan is running close to flat out.



Re: Testing methodology
by duelafn (Priest) on Mar 05, 2012 at 14:30 UTC

    Why would you not use Thread::Conveyor? As for testing your module, adapting Conveyor.t would at least be a place to start.

    Good Day,
        Dean

      Why would you not use Thread::Conveyor?

      Because I know better. Why are you suggesting it when you obviously do not?

      I was asked to update this to explain my reasoning, so here it is:

      Thread::Conveyor doesn't work. It isn't thread-safe.

      Don't believe me, try it for yourself! I'll even provide the test script for you:

      #! perl -slw
      use strict;
      use threads ( stack_size => 4096 );
      use Thread::Conveyor;

      my $belt = Thread::Conveyor->new( {
          maxboxes => 50,
          minboxes => 25,
          optimize => 'memory', # or 'cpu'
      } );

      async{ print while defined( $_ = $belt->take ); }->detach;

      $belt->put( $_ ) for 1 .. 10;
      $belt->put( undef );

      Nine times out of ten this will segfault with:

      Problem signature:
        Problem Event Name:       APPCRASH
        Application Name:         perl.exe
        Application Version:      5.10.1.1007
        Application Timestamp:    4b60ba96
        Fault Module Name:        perl510.dll
        Fault Module Version:     5.10.1.1007
        Fault Module Timestamp:   4b60ba95
        Exception Code:           c0000005
        Exception Offset:         000000000006c6f8
        OS Version:               6.0.6001.2.1.0.768.3
        Locale ID:                2057
        Additional Information 1: 90e0
        Additional Information 2: e939a93db866b76af40148e39e07fd0d
        Additional Information 3: 85c4
        Additional Information 4: 6c632c487ffa8e4b9b7137dbbbe72313

      On the 10th occasion it will emit the following before hanging:

      C:\test>t-TCcrap
      1
      2
      3
      4
      5
      6
      7
      8
      Thread 1 terminated abnormally: Can't use an undefined value as an ARRAY reference at C:/Perl64/site/lib/Thread/Tie/Array.pm (loaded on demand from offset 1939 for 176 bytes) line 75.
      Terminating on signal SIGINT(2)

      And if you trace the run you get:

        Thank you very much for updating and giving your reason. I do in fact use Thread::Conveyor and had never run in to the segfault issue.

        Upon investigation, I only get the segfault when I set the stack size globally (when I load the threads module, or via threads->set_stack_size()). I have no problems if I set the stack size at thread creation time (below). This explains why I have never run into trouble. Clearly, there is some problem with Thread::Conveyor and you have convinced me that there might be some reason to avoid it. Determining whether the problem is fixable is probably beyond me.

        #! perl -slw
        # This is perl, v5.10.1 (*) built for x86_64-linux-gnu-thread-multi
        use strict;
        use threads;
        use Thread::Conveyor;

        my $belt = Thread::Conveyor->new( {
            maxboxes => 50,
            minboxes => 25,
            optimize => 'memory', # or 'cpu'
        } );

        threads->new(
            sub { print while defined( $_ = $belt->take ); },
            { stack_size => 4096 }
        )->detach;

        $belt->put( $_ ) for 1 .. 10;
        $belt->put( undef );

        Good Day,
            Dean
