PerlMonks  

debugging a race

by geoffleach (Scribe)
on Nov 24, 2011 at 18:08 UTC ( [id://939928] )

geoffleach has asked for the wisdom of the Perl Monks concerning the following question:

I'm debugging the tests of a large package of modules that interface to an even-larger C++ package. It was going well until some previously-passing tests began to fail. I profiled a test in an attempt to locate the failure, and --surprise-- the test passed. From this I conclude that there's a race condition somewhere.

No threads.

Given that races tend to be highly context sensitive, I'm looking for hints.

Here's the test driver

    use Audio::TagLib;

    $proto = 'Audio::TagLib::ID3v2::Frame';
    foreach $method (qw{toString}) {
        $return = $proto->can($method);
        print "$method: ", defined $return ? 'ok' : 'not ok', "\n";
    }

The result is "not ok". Under -d:DProf it's "ok".
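For reference, here is a self-contained illustration of what the driver is actually checking: `can` returns a code ref when the method resolves, and undef otherwise. The package `My::Frame` below is a made-up stand-in so the sketch runs without Audio::TagLib installed.

```perl
#!/usr/bin/perl
# Minimal sketch of the can() check; My::Frame is a hypothetical
# stand-in for Audio::TagLib::ID3v2::Frame.
use strict;
use warnings;

package My::Frame;
sub toString { 'frame' }

package main;

my $proto = 'My::Frame';
for my $method (qw{toString missing}) {
    my $code = $proto->can($method);
    print "$method: ", defined $code ? 'ok' : 'not ok', "\n";
}
# prints: toString: ok
#         missing: not ok
```

For an XS class, "not ok" can also mean the XS bootstrap never registered the method with Perl, not that the method is absent from the underlying C++ library.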

My perl5 (revision 5 version 12 subversion 4) configuration:

osname=linux, osvers=2.6.32-131.2.1.el6.x86_64, archname=i386-linux-thread-multi

Replies are listed 'Best First'.
Re: debugging a race
by Anonymous Monk on Nov 25, 2011 at 01:43 UTC

    Why do you believe it is a race condition (particularly since you're not using threads), rather than something else, such as an execution environment difference?

Re: debugging a race
by sundialsvc4 (Abbot) on Nov 25, 2011 at 14:32 UTC

    I agree. This could be a red-herring assumption. Walk back through the commits in your version control system until you reach a point where the tests stop failing, and check out that version to confirm that they have. Then compare the test scripts to make sure they haven't changed, and look over the tests they perform to be sure they still make sense in terms of the changes that have happened. Only then examine the source-code changes. (This is one reason for the maxim, "commit often.")

    The git version control system even provides a bug-chasing "binary search", git bisect, to help you zero in on the commit that caused a particular malfunction to surface.
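A self-contained sketch of that binary search. Run it in a throwaway directory; the file name status.txt, the commit layout, and the grep standing in for the test suite are all invented for illustration.

```shell
# Build a scratch repo with one commit that breaks things,
# then let git bisect find it (exit 0 = pass, nonzero = fail).
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email monk@example.com
git config user.name monk

# Commits 1-3 are "good"; commit 4 introduces the breakage.
for i in 1 2 3 4 5; do
    if [ "$i" -lt 4 ]; then
        echo "ok $i" > status.txt
    else
        echo "broken $i" > status.txt
    fi
    git add status.txt
    git commit -qm "commit $i"
done
good=$(git rev-list --max-parents=0 HEAD)   # root commit is known good

# Bisect automatically: grep plays the role of the test suite.
git bisect start HEAD "$good"
result=$(git bisect run grep -q '^ok' status.txt)
echo "$result" | grep 'is the first bad commit'
git bisect reset
```

The same loop works by hand: mark a good and a bad revision, run the real test suite at each checkout git offers, and answer `git bisect good` or `git bisect bad` until git names the first bad commit.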

    Unfortunately, debugging environments of various sorts are often very different from the regular execution environment. Software can behave quite differently under them, which in my experience makes them not so useful except for microscopic troubleshooting.

      Thanks for the two replies.

      My assumption that there was a race somewhere was based on the fact that the problem went away when I ran the test under profiling.

      Otherwise, I have a baseline on which the test I quoted runs correctly. I'm in the process of walking back module changes, but it's painful, because the failures are not consistent. By that I mean that the test will pass, and then somewhat later, under what appear to be the same conditions, it will fail.
