
Re: RFC: Data::Dumper::Simple

by BrowserUk (Pope)
on Jul 31, 2004 at 20:34 UTC ( #378979=note )

in reply to RFC: Data::Dumper::Simple

I've become a great fan of Devel::StealthDebug (except the name).

    use Devel::StealthDebug;
    ...
    #!dump( $var, \@foo, \%bas )!
    ...
    #!watch( $volatile )!   ## Only traced when it changes.
    ...

    ----outputs----
    $var = 7;
    \@foo = [ 123, 456, ];
    \%bas = { 'a' => 1, 'b' => 2, };
    in Test::new at line 31

Comment out the use line and all the tracing disappears. It can also be enabled/disabled by use line parameter, environment variable or presence/absence of a filename.

No run-time intrusions at all when disabled, but easily re-enabled. Watches are especially useful for cutting down the volume of trace, though they can be a little temperamental. That's only scratched the surface of its capabilities; it also has #!assert( condition )!, #!when( varname, op, value )! and #!emit("sometext")! pragmas.
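A minimal sketch of how those comment directives sit in a script. The exact argument layout for #!when()! is my guess from the description above; and note that with the use line commented out, the directives are inert comments and the script runs unchanged, which is the whole point:

```perl
use strict;
use warnings;
# use Devel::StealthDebug;    # uncomment to activate the directives below

my $count = 0;

#!assert( $count >= 0 )!      # complains when the condition is false
#!emit("entering loop")!      # prints the literal text when enabled

$count++ for 1 .. 3;

#!when( $count, '==', 3 )!    # hypothetical argument layout, per the text above

print "count=$count\n";
```

Run as-is (directives disabled) it simply prints count=3; uncommenting the use line is all that's needed to turn the tracing on.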

Its dump() format is preferable to most of the dumpers-that-wannabe-serialisers I've tried, and it doesn't exact the huge memory overhead that serialisers require for circular-reference detection when dumping complex structures.

It's also filter-based, but like you, I don't have a problem with that for debugging purposes.

Examine what is said, not who speaks.
"Efficiency is intelligent laziness." -David Dunham
"Think for yourself!" - Abigail
"Memory, processor, disk in that order on the hardware side. Algorithm, algorithm, algorithm on the code side." - tachyon

Replies are listed 'Best First'.
Re^2: RFC: Data::Dumper::Simple
by Aristotle (Chancellor) on Jul 31, 2004 at 21:18 UTC

    Note that this strikes me as a much better use of a filter. It's not trying to find substitutable bits inside the source, just generating code from comments (though even being certain about what is or isn't a comment is far from trivial). I'd be even more appreciative if its directive syntax didn't look like code, particularly if it were clearly restricted to things the module can parse with 100% certainty.

    If I were to write such a module, I'd wrap the debug statements in POD, because while that is just as non-trivial as comments, pretty much all existing POD parsers misparse source in known, predictable ways.

    I don't think the debugging scenario makes brittle approaches excusable; the potential for subtle breakage introduced by filters would be doubly maddening if I'm already looking into another problem. I want to steer particularly clear of Heisenbugs in instrumentation code.

    Makeshifts last the longest.

      I think that the requirement for plings (!) at either end of each comment-embedded directive makes the parsing fairly unambiguous. The fact that it allows two or more #s to precede the directive pleases me, as I tend to use two most of the time. It enables me to slightly mis-define the comment character in my syntax highlighter, which reduces the chance of it mis-recognising $#array and similar as the start of a comment.

      I don't like the POD idea, but then I am not a fan of POD anyway. The need to use 5 lines of source-space to embed a single line of POD has always bugged me immensely.

      Examine what is said, not who speaks.
      "Efficiency is intelligent laziness." -David Dunham
      "Think for yourself!" - Abigail
      "Memory, processor, disk in that order on the hardware side. Algorithm, algorithm, algorithm on the code side." - tachyon

        That is indeed a pain, and a reason I don't tend to interleave POD and code. It's also not an easy problem to solve; the best I can think of is a directive that implies cutting back to code right away. That buys two lines, at least.
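For concreteness, here is roughly what embedding a single debug directive in POD costs today; the "debug" target is hypothetical, and perl itself skips from any =directive line to the next =cut, so this script runs and ignores the directive entirely:

```perl
use strict;
use warnings;

my $x = 6 * 7;

=for debug dump( $x )

=cut

print "x=$x\n";
```

That's four extra lines of source-space (five with a trailing blank line) for one line of directive; a directive that implied an immediate cut back to code would reclaim the =cut and its following blank line.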

        Still, I can't shake the feeling that it shouldn't be hard to write a reasonably easy to use dumper that need not rely on source filters.
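A sketch of that no-filter approach, assuming the caller is willing to pass variable names explicitly. dump_named is a hypothetical helper, not part of any module mentioned here; only the core Data::Dumper is used:

```perl
use strict;
use warnings;
use Data::Dumper;

# Hypothetical helper: the caller supplies name => value pairs, so no
# source filtering or pad-walking is needed to recover variable names.
sub dump_named {
    my %pairs = @_;
    local $Data::Dumper::Terse    = 1;
    local $Data::Dumper::Indent   = 1;
    local $Data::Dumper::Sortkeys = 1;
    return join '', map { "\$$_ = " . Dumper( $pairs{$_} ) } sort keys %pairs;
}

my $var = 7;
my @foo = ( 123, 456 );
print dump_named( var => $var, foo => \@foo );
```

The price is typing each name twice; the benefit is that it's plain Perl with no compile-time magic to go wrong.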

        Makeshifts last the longest.

Re^2: RFC: Data::Dumper::Simple
by leriksen (Curate) on Aug 01, 2004 at 13:26 UTC
    I'm still coming to a decision as to exactly how I want debug constructs in my code. Looks like I'll have to investigate adding this to my bag'o'tricks.

    Some of my code never sees the light of day.
    Some goes into highly visible (e.g. your phone bill) production systems.

    I am trying to establish my own set of guidelines on code 'noise', that doesn't differ by too much across those two disciplines. I know there will necessarily need to be a difference, I want to minimise that difference.

    • I want maximum debugging during development
    • I want minimum noise during production
    • making code changes to change this behaviour is unacceptable - sometimes (client sites especially) that is just not an option, so plan ahead - don't do it.

    Currently I mainly use STDERR->print(Dumper(...)); in 'private' code, and Log::Log4perl for everything else I think might ever be seen/used by anyone else.
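A minimal Log::Log4perl configuration along those lines, as a sketch: a single screen appender writing to STDERR. The appender and layout class names are the standard ones shipped with Log::Log4perl, and the whole thing can be swapped for a file appender at a client site without touching code:

```
log4perl.rootLogger                               = INFO, Screen
log4perl.appender.Screen                          = Log::Log4perl::Appender::Screen
log4perl.appender.Screen.stderr                   = 1
log4perl.appender.Screen.layout                   = Log::Log4perl::Layout::PatternLayout
log4perl.appender.Screen.layout.ConversionPattern = %d %p %c - %m%n
```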

    All our internally developed libraries use Log::Log4perl - if someone in the company uses one of our libraries, they need to configure a logger.

    Perhaps I am not properly separating the disciplines of logging and debugging - I feel that everything your code reports back is a debug statement - to somebody at some level - some are aimed at developers, some at sysadmins.

    use brain;

      I understand your dilemma. I've been vacillating between the 'log everything in case it goes wrong' and 'turn it all off for production, 'cos it slows everything down, causes disk-space maintenance problems, no one ever reads them, and the thing you need to see is never in the log anyway' camps for 20 years.

      In the early days, processors were slow and disks were small, so logging was minimal by necessity. More recently, disk-space got cheap, compression got better and the processors faster. Extensive logging became attractive.

      I tried it on a couple of projects but came to the conclusion that for the most part extensive logging is pretty pointless. My reasoning goes like this:

      • Faster processors, bigger disks and better compression just mean that you can produce and store stuff that nobody will ever read at an even faster rate.
      • What you need is never in the logs.

        Unless you log every line, variable and every variable change, you always end up adding more logging or turning more of it on and trying to re-create the problem anyway. You might as well leave it all turned off and then turn it on when you need to.

      • In general, the more there is in the log, the harder it becomes to follow it.

        Stage 1) I want a coarse-grained "So far, so good. So far, so good" heartbeat level across the whole application until I can track down roughly where things are going belly-up.

        Stage 2) I want to be able to turn more detailed tracing on, bracketing either side of the suspected point of failure--but with everything else turned off. Otherwise you get the "can't see the wood for the trees" syndrome.

        Stage 3) I almost always want to add some extra watches or assertions to track down and then confirm the bug prior to changing the code, and to reconfirm after it is fixed.

        Historically, these additions get deleted and have to be put back when the bug (or a new one) reappears. Or they get left in, commented out, and have to be manually reenabled.

      I want to be able to switch tracing on and off across a span of lines, a subroutine, or a package. When it's off, it should leave no artifacts in the code.
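One way to get the "no run-time cost when off" property without a filter is a compile-time constant: when the constant is false, perl folds the guarded statements away entirely at compile time. MYAPP_TRACE is a made-up environment variable name for this sketch:

```perl
use strict;
use warnings;

# A false constant here causes perl to discard the guarded statements
# below at compile time, so disabled tracing costs nothing at run time.
use constant TRACE => $ENV{MYAPP_TRACE} // 0;

TRACE and warn "trace: starting up\n";

my $total = 0;
for my $n ( 1 .. 5 ) {
    TRACE and warn "trace: adding $n\n";
    $total += $n;
}

print "total=$total\n";
```

The trace statements stay in the source permanently, but unlike commented-out code they can be re-enabled with an environment variable rather than an edit.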

      Currently, D::SD doesn't easily allow this range-of-lines or subroutine enablement, but I think that its filter-based nature lends itself to this modification. The package-level mechanism had me foxed for a while, but thanks to theorbtwo's response to my recent question, I now think I've figured out how to do this. I may have a go at tweaking D::SD for lines and subs, and if it seems to work okay, I'll offer the mod back to the author.

      The only other thing missing is a debug level selector.

      1. Heartbeat tick through out the code.
      2. Entry/exit point values.
      3. Major logic flow.
      4. Individual assertions and watches.

      More levels than that and it becomes a labour of love to categorise them--and nobody can agree on the categorisations anyway. Preferably, the first three levels should be installed automagically, with the fourth level evolving over time as required, but remaining in-situ in perpetuity.
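The four-level scheme above can be sketched in a few lines; DEBUG_LEVEL is fixed at compile time from a made-up environment variable, and trace is a hypothetical helper for illustration:

```perl
use strict;
use warnings;

# Level selector fixed at compile time; 0 means all tracing is off.
use constant DEBUG_LEVEL => $ENV{MYAPP_DEBUG} // 0;

# Hypothetical helper: emit a message only at or below the selected level.
sub trace {
    my ( $level, $msg ) = @_;
    warn "[$level] $msg\n" if $level <= DEBUG_LEVEL;
}

trace( 1, 'heartbeat: main loop alive' );     # level 1: heartbeat tick
trace( 2, 'enter frobnicate( x => 42 )' );    # level 2: entry/exit point values
trace( 3, 'taking the cached branch' );       # level 3: major logic flow
trace( 4, 'watch: $total changed to 15' );    # level 4: assertions and watches

print "done\n";
```

Setting MYAPP_DEBUG=2 would show the heartbeat and entry/exit messages while keeping the noisier levels quiet.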

      At least, that's where I think I stand on the subject. Tomorrow I may vacillate again :)

      Examine what is said, not who speaks.
      "Efficiency is intelligent laziness." -David Dunham
      "Think for yourself!" - Abigail
      "Memory, processor, disk in that order on the hardware side. Algorithm, algorithm, algorithm on the code side." - tachyon
