PerlMonks  

Devel::Cover tutorial wanted

by qq (Hermit)
on Apr 05, 2005 at 11:23 UTC ( [id://444933]=perlquestion: print w/replies, xml ) Need Help??

qq has asked for the wisdom of the Perl Monks concerning the following question:

I've got a medium-sized mod_perl application that uses CGI::Application. I want to increase the test coverage, so I installed Devel::Cover.

I'm having a hard time understanding what to do with it, however.

Is there a tutorial somewhere (not Devel::Cover::Tutorial, which is really advice about coverage)? Googling, super search, and the perl-qa mailing list archives provided no clues.

In more detail: I have a directory structure like:

    lib/Foo/Bar/Baz.pm
    lib/Foo/Bar/Quux.pm
    lib/Foo/Bar/test.pl      - just runs Test::Harness runtests
    lib/Foo/Bar/t/91.t
    docs/cgi-bin/a_webapp.pl
    docs/cgi-bin/another_webapp.pl

The Foo/Bar levels are essentially empty, which is why the t/ dir sits at the bottom, rather than at the top.

I run it like:

    > perl -MDevel::Cover=-select,Foo lib/Foo/Bar/test.pl
    > cover -report html -outputdir /somewhere

Question 1: Should I run it against test.pl, against the webapp.pl scripts, or embed it in the mod_perl handler?
I believe that test.pl is right, because Devel::Cover is essentially telling me which bits of compiled code have been run when I call the script, and what I want to know is which bits have been run by the tests.

However, that gives me very little result - just the coverage for the script itself - whereas running webapp.pl gives a much more complete picture, including coverage for all the included modules, etc.

Or, I could run it on individual *.t files and merge the coverage databases ...

Question 2: How do I interpret the results?
The resulting reports have columns for statement, branch, condition and subroutine, next to each line of code. See an example from Paul Johnson's site. What do the linked numbers mean? Are they the number of times the line of code was encountered?

Sorry for the somewhat vague questions. This looks like a wonderful tool, but I'm feeling my way around in the dark and I don't even know what I'm looking for. Any general advice on its use is appreciated.

thanks, qq

Replies are listed 'Best First'.
Re: Devel::Cover tutorial wanted
by dragonchild (Archbishop) on Apr 05, 2005 at 12:39 UTC
    Devel::Cover tells you how much of your code your tests are exercising. So, if you don't hook it into your tests, how is it supposed to do its job?

    Personally, I think your layout is borked. There is no reason to put your t/ directory under lib/Foo/Bar, just because the Foo/Bar levels have no files. Put it as a peer to lib/. So, that way, if you put your t/ there, your cover_db will also be a peer to lib/, which is a good idea. (Also, I'm a little curious why you consider your CGI scripts to be documentation in docs/ as opposed to executables in scripts/ or bin/.)

    Now, the numbers being linked are percentages. In other words, what percentage of the branches/conditions/etc. that this "line" has were exercised? But, you can ignore those for now.

    Right now, you need to look at the summary page. The important things here are the boxes that aren't green or white. Those are the ones you need to look at.

    • There's no POD, which may or may not be a bad thing. (I've opened a bug against Devel::Cover because it doesn't allow parameters to Test::Pod::Coverage.)
    • Only 3/4 of the subroutines are being exercised. This is a no-brainer. You should always have 100% in this category.
    • For the branches, conditions, and statements, I would strive for higher.

    Now, when you strive for higher in the branches/conditions/statements, that's when you need the linked-to items. You look at those and where it's red, that tells you what you're not testing. For me, I often test my expected cases, but don't always test my failure cases. So, a lot of the code I have to make sure bad things don't happen never gets tested, which shows up in my coverage. I find that if you test all your happy-day scenarios and the basics of your sad-day scenarios, you'll get to 90%+ very quickly.
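
    A sad-day test of that sort is cheap to write. Here is a minimal sketch with Test::More - the divide routine and its error message are invented for illustration:

```perl
use strict;
use warnings;
use Test::More tests => 2;

# Invented example routine with an error branch that only a
# sad-day test will exercise.
my $divide = sub {
    my ($n, $d) = @_;
    die "division by zero\n" if $d == 0;
    return $n / $d;
};

# Happy-day case: covers the normal return path.
is( $divide->(10, 2), 5, 'ten divided by two is five' );

# Sad-day case: covers the die() branch, which would otherwise
# show up red in the coverage report.
eval { $divide->(10, 0) };
like( $@, qr/division by zero/, 'dies on zero divisor' );
```

    Without the second test, the die() line shows up red in the detailed report even though the suite passes.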

      Thanks for your clear response.

      Devel::Cover tells you how much of your code your tests are exercising. So, if you don't hook it into your tests, how is it supposed to do its job?

      Agreed - but as noted, running against the test harness script gives no practical output. I can run it against each .t script individually, but that's a pain. When you use Devel::Cover, do you run it against a script that calls all your tests, or some other way?

      Personally, I think your layout is borked. There is no reason to put your t/ directory under lib/Foo/Bar, just because the Foo/Bar levels have no files.

      I'll move up the t/ directory. The docs/ is a typo in my node. The real dir is called document_root/, which isn't exactly inspired itself.

      Update: clarified docs/ layout

        Since you are running under mod_perl, you need to tell Devel::Cover to look at the code that's being run by Apache. Put something like this into your startup.pl (or modperl_extra.pl if you are using Apache::Test, which I recommend anyway for testing a mod_perl application):
        use Devel::Cover ('+select' => 'My/Module', '+silent');
        Then I would recommend that you look at using something like Module::Build to create your test harness. With it, you can simply run ./Build testcover and it will take care of making sure the test harness has Devel::Cover enabled.
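
        A minimal Build.PL for that might look like this (the module name, version and license are placeholders); once it is in place, perl Build.PL followed by ./Build testcover runs the whole suite under Devel::Cover:

```perl
# Build.PL -- minimal Module::Build setup; all names are placeholders.
use strict;
use warnings;
use Module::Build;

Module::Build->new(
    module_name  => 'Foo::Bar',
    dist_version => '0.01',
    license      => 'perl',
)->create_build_script;
```
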
        . . . running against the test harness script gives no practical output.

        How are you doing this? What does your test.pl look like? Is there a reason you're not using prove or 'make test'?

Re: Devel::Cover tutorial wanted
by leriksen (Curate) on Apr 06, 2005 at 02:45 UTC
    OK, firstly, don't be nervous.

    You should take great heart from the fact that you want to do testing, and that you want to get a great deal of value and improvement in your code's quality from that testing - bravo, that makes you at least twice as good a developer as most, IMO.

    I do recommend you wrap your app and testing up in the standard ExtUtils::MakeMaker framework. Apart from the fact that Devel::Cover (D::C) works very well with it, you do get a lot more benefits - manifests, distribution, etc.

    I do have a basic tutorial on writing a simple Makefile.PL and running tests at Perlmeme.

    But it doesn't go into coverage testing. So to get D::C to report on your coverage from a 'make test' invocation, do this:
    HARNESS_PERL_SWITCHES=-MDevel::Cover make test
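
    For reference, a minimal Makefile.PL that makes 'make test' work for the layout in the question might look like this (the module name and the VERSION_FROM path are assumptions based on that layout):

```perl
# Makefile.PL -- minimal ExtUtils::MakeMaker setup; names are placeholders.
use strict;
use warnings;
use ExtUtils::MakeMaker;

WriteMakefile(
    NAME         => 'Foo::Bar',
    VERSION_FROM => 'lib/Foo/Bar/Baz.pm',
    test         => { TESTS => 't/*.t' },
);
```
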

    In answer to one of your questions: note that if you have more than one .t file, D::C will merge the results for you.

    After that has run, you really should then run the reporting tool in D::C called 'cover' and use your favourite browser to look at the 'coverage.html' file that 'cover' generates. I usually delete old coverage html reports before each run of D::C, so my command line for a coverage test usually looks like this
    cover -delete && HARNESS_PERL_SWITCHES=-MDevel::Cover make test && cover
    (actually I use an alias of 'ccc')

    (NB: if 'make test' reports a failure, that last 'cover' command won't run, so you may have to do it by hand or replace the last && with ||)

    Which finally brings us to interpreting the numbers.

    Hopefully you see the following columns in the coverage.html
    file,stmt,branch,cond,sub,time,total
    The File column lists the files that D::C 'instrumented', that is, the files it 'measured', for want of a better word. For each of these, the stmt column reports the percentage of executable lines actually run, so if it reports '50%' and your code has 1000 executable lines, your tests only ran 500 statements. What's more, by clicking on that file's name (the file names are links to a more detailed report), you can see exactly which lines have and have not been executed.
    The same basic idea follows for cond, branch and sub - if your file has 10 subroutines, and you execute only 5 in your tests, you see '50' in the sub column for that file's line in the coverage.html report. If you click on any of these, you get a more detailed report on what has and has not been tested.
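
    To make the sub column concrete, here is a toy module (names invented) with two subroutines where the tests only ever call one of them - the report would show 50% in the sub column for that file:

```perl
use strict;
use warnings;

# Toy module with two subs; a test suite that calls only one of
# them scores 50% in the sub column of the coverage report.
package Toy;
sub double { my ($n) = @_; return 2 * $n }
sub triple { my ($n) = @_; return 3 * $n }   # never called -> uncovered

package main;

# The "tests" only ever call double(), so triple() stays red
# in the detailed per-file report.
print Toy::double(21), "\n";
```
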

    Note that the individual file reports do list the number of times each line was run, and the same number as a link in the sub column jumps to the page showing which subs have and have not been run - not sure why it does that.

    The time column is relative time, as a percentage - probably not too useful, except to show where most of the testing time was spent - I wouldn't rely on it for any performance profiling or benchmarking.

    Generally, it is not too hard to get 100% coverage of sub and stmt columns, and quite hard to get 100% in the branch and cond columns.

    Sometimes though, no matter how hard you try, you cannot provoke a line to be executed, or a branch to be followed - this may be a sign that the unexecuted code can never be executed - for example

    if ($x < 5 and $x > 5) { wowza(); }

    Now this is exactly the kind of thing coverage testing is good at showing - some obviously wrong logic, that results in some code never getting executed. When you come to the conclusion that the existing logic can never be satisfied, you need to make a decision as to whether to change or remove the logic - and of course, write some tests to prove the decision is correct.
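
    For instance, if the intent of the impossible condition above was really "anything but 5", the fix plus a few boundary tests (all invented for illustration) might look like:

```perl
use strict;
use warnings;

# "$x < 5 and $x > 5" can never be true; if the intent was
# "$x is anything but 5", an 'or' is what was meant.
my $not_five = sub {
    my ($x) = @_;
    return ($x < 5 or $x > 5) ? 1 : 0;
};

# Boundary tests pin the corrected logic down and turn the
# branch green in the coverage report.
die "4 should pass"     unless $not_five->(4) == 1;
die "6 should pass"     unless $not_five->(6) == 1;
die "5 should not pass" unless $not_five->(5) == 0;
print "boundary checks passed\n";
```
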

    Please keep in mind that D::C is still in beta, and it does prefer to play with a fairly recent perl, so sometimes you just have to accept that D::C is wrong. Currently I have a problem where 'make test' reports a 100% pass rate, but 'make test' under D::C has one test file die in a funny way, and hence the pass rate is < 100%. It can be quite hard to find what it is that D::C doesn't like about your code - the perl-qa mailing list can be your friend in cases like this.

    I wrote a meditation on things I learned in getting some modules to have 100% coverage.

    Also, the phalanx project is trying to get the 100 most popular CPAN modules to have 100% coverage.

    ...it is better to be approximately right than precisely wrong. - Warren Buffett


Node Type: perlquestion [id://444933]
Approved by Corion