PerlMonks
Determining debug levels

by drewbie (Chaplain)
on Feb 28, 2002 at 06:59 UTC (#148150)
drewbie has asked for the wisdom of the Perl Monks concerning the following question:

In the course of developing, debugging information is invaluable. But there are times when I don't want to see every possible debug message, so this requires debug levels. My question is: what guidelines have you used for determining what is logged at each debug level? I've been using numeric levels, with increasing numbers logging more data. The other end of the spectrum involves the use of text levels, such as "debug", "warning", "emergency". I'm leaning towards keeping numeric levels, but I'm interested in what other mongers have used. The issue is still open, so please convince me if you think your idea is better.

As for the actual logging mechanism, I plan to use Log::Dispatch::Config. It seems quite flexible and well designed. Flexibility is key, because on my home server I have access to the error log, while on my shared production host I have very limited access, so a separate file is easier. Thanks for all your ideas.
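For what it's worth, Log::Dispatch::Config drives everything from a config file, which is exactly what that flexibility requirement calls for: the same code can log to the screen at home and to a separate file on the shared host. A sketch, with the AppConfig-style keys taken from the module's documentation and the paths, format string, and sink names being illustrative assumptions:

```perl
# Hypothetical log.conf for Log::Dispatch::Config (key names per the
# module's docs; filenames and format are made up for illustration):
#
#   dispatchers = file screen
#   file.class = Log::Dispatch::File
#   file.min_level = debug
#   file.filename = /path/to/app.log
#   file.mode = append
#   file.format = [%d] [%p] %m%n
#   screen.class = Log::Dispatch::Screen
#   screen.min_level = warning

use strict;
use warnings;
use Log::Dispatch::Config;

# Parse the config once; instance() then returns the shared logger.
Log::Dispatch::Config->configure('/path/to/log.conf');
my $log = Log::Dispatch::Config->instance;

$log->debug('verbose diagnostic');    # file only (screen wants >= warning)
$log->warning('goes to both sinks');
```

Switching hosts then means editing log.conf, not the code.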

Re: Determining debug levels
by Biker (Priest) on Feb 28, 2002 at 08:26 UTC

    Which one of the following would give more debugging information?

    • Urgent
    • Severe
    • Fatal
    Not necessarily that evident, especially to the guy who is going to maintain your code.

    OTOH, a simple guess would reveal that debug level 3 should probably give more debugging info than level 2.

    And nothing stops you from introducing debug levels 2.5 and 2.6 later on, should that prove necessary.
    Everything will go worng!

Re: Determining debug levels
by dws (Chancellor) on Feb 28, 2002 at 08:28 UTC
    My question is: what guidelines have you used for determining what is logged at each debug level?

    I've built logging into several enterprise systems to support diagnosis in the field, and have always used simple numeric levels. It's a lot more convenient to write log("diagnostic stuff") if $DEBUG > 1; (or the equivalent from Log::Dispatch::Config) than it is to wrestle with symbolic names for levels.

    Stay simple, and don't get too hung up on a rigid numbering scheme. I've gotten a lot of mileage out of the following:

    • Level 1 - all user input (clicks, gestures, text input)
    • Level 2 - "i'm here" from significant subroutines, plus any SQL queries that get executed
    • Level 3 - diagnostic info from significant algorithms, plus summarized results of any SQL queries
    • Level 4 - kitchen sink
    All warning and error messages go into the log, unconditionally. Each log message is timestamped. That provides a cheap way of doing performance profiling.
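The scheme above — numeric $DEBUG gating diagnostics, warnings and errors logged unconditionally, every line timestamped — can be sketched in plain Perl. All names here (diag, warn_log, and so on) are hypothetical, not from any poster's actual code:

```perl
#!/usr/bin/perl
use strict;
use warnings;

our $DEBUG = 2;    # in practice, set from a config file or the environment

# Timestamp each line; cheap performance profiling falls out for free.
sub stamp {
    my @t = localtime;
    sprintf "%04d-%02d-%02d %02d:%02d:%02d",
        $t[5] + 1900, $t[4] + 1, @t[ 3, 2, 1, 0 ];
}

sub logline { print STDERR '[', stamp(), '] ', @_, "\n" }

# Diagnostics obey the level; warnings and errors always go out.
sub diag      { my $level = shift; logline(@_) if $DEBUG >= $level }
sub warn_log  { logline( 'WARNING: ', @_ ) }
sub error_log { logline( 'ERROR: ',   @_ ) }

diag( 1, 'user clicked "Save"' );      # level 1: user input
diag( 2, 'entering handle_save()' );   # level 2: "I'm here"
diag( 3, 'query returned 14 rows' );   # level 3: suppressed at $DEBUG = 2
warn_log('disk usage at 91%');         # logged unconditionally
```

Turning the knob up to level 3 is then a one-variable change, with no code edits at the call sites.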

    95% of the time, turning on level 1 has been sufficient to gather enough info to reproduce the problem locally, if not make it glaringly evident just what had gone off the rails and why. It's amazing how hard it is to believe that users will try certain sequences of actions, or that certain sequences are even possible, until you see hard evidence in the logs.

      Words of wisdom these are, the words of brother dws. I agree wholeheartedly. Especially the remark about logging *all* warning and error messages. In these days of cheap disk space, freely available filtering languages like our beloved Perl, and fast editors with good search commands, selective logging of warnings and errors is almost certainly the wrong choice. Remember, you can always filter downstream, but if valuable information is ignored, you will never get it back.

      And always remember you have other options than just log to a file - you can log to a database, too, and get all kinds of nifty features for free - easy reporting, triggers (like, send email if inserted record has priority level > 3 and text like "%boss%"), etc. I have seen whole system management environments being built around this idea, since it is so much more natural for most people to work with data in a database than with data in files (non-Perl people, that is :-) ... ). Heck, imagine the possibilities: you can just store a mapping of problem categories to people, and have your supporters (or even users) just get their daily reports (or triggered emails, or ... ) where they will find their problems to work on - with direct information from your applications.

      Christian Lemburg
      Brainbench MVP for Perl
      http://www.brainbench.com

        In these days of cheap disk space, freely available filtering languages like our beloved Perl, and fast editors with good search commands, selective logging of warnings and errors is almost certainly the wrong choice. Remember, you can always filter downstream, but if valuable information is ignored, you will never get it back.

        That's a good point. Big logs are much less of a problem than in bygone days.

        Still, if you go this way, be sure to do a back-of-the-envelope calculation to guestimate how much disk space your logs will grow per day. You'll also want to have some sort of log aging scheme. Having a system roll over in the field because the disk filled up can be embarrassing, especially when everyone claims it can't happen.
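The back-of-the-envelope calculation suggested above is worth making concrete. All the figures here are assumptions for illustration, not measurements:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Assumed traffic and verbosity (adjust to your own system):
my $req_per_day    = 50 * 60 * 24;    # 50 requests/min -> 72_000/day
my $lines_per_req  = 20;              # log lines per request at this level
my $bytes_per_line = 120;             # timestamped line, rough average

my $bytes_per_day = $req_per_day * $lines_per_req * $bytes_per_line;

printf "~%.1f MB/day, ~%.1f GB over a 90-day retention window\n",
    $bytes_per_day / 2**20, $bytes_per_day * 90 / 2**30;
```

A few minutes with numbers like these tells you whether you need log rotation this month or this year — before the full disk tells you for you.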

        Excellent idea! I could have all kinds of fun w/ logging to a database. The stuff I'm writing now is too small to be worth the effort, but I can envision all kinds of fun reports now that you've mentioned it. I've done some database logging in the past and it's very cool to be able to use SQL to search for problems, rather than grep & regexes. Idea noted.
      I think you've hit the nail on the head dws! This is exactly what I was going for. I agree 100% with you on always logging warnings & errors. Right now these apps are very simple, but as they grow, having discrete debug levels will help out tremendously. Thanks!
Re: Determining debug levels
by mirod (Canon) on Feb 28, 2002 at 08:53 UTC

    I have recently started to use a different technique: all the Perl files in my application start with something like:

    my %trace = ( index_generation => 1, link_computation => 0, link_output => 1, );

    and the log statements read like:

    print LOG "blah" if $trace{link_output};

    When I see a problem in the result (I am doing mostly batch processing here, so YMMV) I can then turn on whatever trace is needed. I find this very convenient, it allows me to focus on whatever area the problem seems to be in.

    Of course this could be improved by adding all and none fields to the trace hash, so I could write:

    print LOG "blah" if $trace{all} or ( $trace{link_output} and !$trace{none} );

    Or even better, write a function my_log:

    sub my_log { my $type = shift; print LOG @_ if $trace{all} or ( $trace{$type} and !$trace{none} ); }

    I find this more flexible than just a numeric value, which inevitably ends up filling my log files with garbage. I guess just prefixing the log entry with the type of log and grepping it could also work, but I like this way better because I can also use it when I am in debug mode under perl -d.
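Pulling the pieces of this per-category approach together, here is a self-contained sketch. It logs to STDERR rather than a LOG filehandle so it runs as-is, and the category names are just examples:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Per-category trace switches, plus the suggested all/none overrides.
my %trace = (
    index_generation => 1,
    link_computation => 0,
    link_output      => 1,
);

# Log only when the message's category is switched on: 'all' forces
# everything through, 'none' silences everything.
sub my_log {
    my $type = shift;
    return unless $trace{all} or ( $trace{$type} and !$trace{none} );
    print STDERR "[$type] ", @_, "\n";
}

my_log( index_generation => 'rebuilding index' );    # printed
my_log( link_computation => 'computing links' );     # suppressed
```

Flipping one hash entry (or setting all/none) then focuses the trace on whatever area the problem seems to be in, exactly as described above.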
