http://www.perlmonks.org?node_id=1138589

SBECK has asked for the wisdom of the Perl Monks concerning the following question:

In the past year, I was introduced to two new tools I hadn't previously been aware of (Devel::Cover and Travis CI) that I am now using for my modules, and I was just wondering what other tools might be out there that I could benefit from.

What I'm looking for are tools that will improve the overall quality of my modules in terms of usability, readability, completeness, or whatever other metric. I looked around the monastery and didn't find such a list... after some feedback, I'd be happy to add it as a tutorial.

Tools that I use now are listed below. I know that many of these are pretty obvious, but perhaps for someone just starting out, they should be included. What am I missing?

Update: I'm going to add the suggestions to the list as they come in, so I don't necessarily use all of them... and of course, not every tool will fit everyone's needs and/or wants, but they are a great place to start looking.

Change Kwalitee
Makes sure that the Changes file follows the standard format.
CPAN Testers
To see which platforms the tests succeed/fail on.
CPANTS Kwalitee
Reports on the quality of the module by checking for a number of best practices.
Devel::Cover, cpancover.com
These can be used to make sure that every line in your module is covered by at least one test in the test suite.
Devel::NYTProf
For profiling where the time is being spent in order to speed things up.
Perl::Critic
Check to make sure that the code matches the best practices described in Damian Conway's Perl Best Practices.
Perl::Tidy
To make sure that a module is nicely indented and uses some of the best practices for coding style.
Pod::Spell
A spell checker for Pod files.
Release::Checklist
A check list of things to look at when releasing a new module.
Task::Kensho
A list of recommended Perl modules. Especially useful are Task::Kensho::ModuleDev and Task::Kensho::Testing, which contain modules recommended for development and testing.
Test::Pod, Test::Pod::Coverage
These are used to make sure that the Pod is valid and that it covers all of the functions in a module.
Travis CI
Tool that integrates with GitHub to automatically make sure that every new check-in passes all tests on a number of different Perl versions.
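Several of the tools above (Test::Pod in particular) are wired into a distribution with a small author test file. A minimal sketch of the conventional t/pod.t, using the skip-if-missing idiom so installation on a user's machine never fails just because the author-side module isn't there:

```perl
#!perl
use strict;
use warnings;
use Test::More;

# Test::Pod is an author-side dependency; skip gracefully if the
# user installing the distribution doesn't have it.
eval { require Test::Pod; Test::Pod->import() };
plan skip_all => 'Test::Pod required for testing POD' if $@;

# Checks every Pod file found under blib/ (or lib/) for validity.
all_pod_files_ok();
```

An analogous t/pod-coverage.t does the same dance with Test::Pod::Coverage and its all_pod_coverage_ok() function.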

The tool I wish I had the most, but don't (to my knowledge) would be a place where I could log in to and select the OS, version of perl, and version of any prerequisite modules in order to debug a test from the cpantesters site. If this exists and I don't know about it, please fill me in!

Replies are listed 'Best First'.
Re: Improving the quality of my modules
by stevieb (Canon) on Aug 14, 2015 at 15:09 UTC

    Nice list. Here's another one you might add... Perl::Critic

    I too have frequently desired a place where I could throw a new module to see test results, instead of uploading a new version to CPAN.

      Thanks. I saw that some time ago but couldn't remember what it was called. I'll definitely add it to the list.
Re: Improving the quality of my modules
by Tux (Abbot) on Aug 14, 2015 at 15:14 UTC

    I recently started Release::Checklist. It is far from complete. See Checklist.md for the current state.

    All feedback welcome.


    Enjoy, Have FUN! H.Merijn

      Super! I've had ideas along this path but never acted on them. Keep it going!

Re: Improving the quality of my modules
by Athanasius (Bishop) on Aug 14, 2015 at 15:23 UTC
      Agreed... even though none of my modules made the recommended list. :-)
Re: Improving the quality of my modules
by Ravenhall (Beadle) on Aug 14, 2015 at 15:32 UTC

    One that I use a lot (and encourage others to use) is Perl::Critic. For those who aren't aware, it is a static source code analyzer. It critiques your code against best practices and recommendations from both the Perl community and Damian Conway's excellent book Perl Best Practices.

    <rant>A common criticism of Perl::Critic I've heard before is that some people disagree with this or that default policy. So for those folks I recommend Perl::Critic::Lax, which has policies that get Perl::Critic to loosen its tie a bit. There are also 167 modules in the Perl::Critic namespace, many of which are collections of policies, and 65 in the Perl::Critic::Policy sub-namespace itself. Chances are that there's a policy in there that might scratch your itch. Failing that, they can always RTFM and learn to make their own policies.</rant>

    I have found static source code analysis to be a great tool when beginning work on a very large codebase. It helps point out things that could very well be long-standing bugs of which the team working on the code may not even be aware. It also helps me zero in on areas of the code that may have only been put through perfunctory testing that may be in need of extra attention. I highly recommend trying it out if you've never used it.
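Beyond the perlcritic command-line tool, Perl::Critic can be driven programmatically. A minimal sketch, assuming Perl::Critic is installed (the inline $code string here is just an illustrative snippet that violates the use-strict policy):

```perl
#!/usr/bin/env perl
use strict;
use warnings;
use Perl::Critic;

# Severity levels range from 'gentle' (only the most serious
# policies) down to 'brutal' (everything).
my $critic = Perl::Critic->new( -severity => 'gentle' );

# critique() accepts a filename or a reference to a string of code.
my $code       = 'my $x = 1;';    # hypothetical snippet: no 'use strict'
my @violations = $critic->critique( \$code );
print $_ for @violations;
```

The same object can be pointed at real files, e.g. $critic->critique('lib/My/Module.pm') for a hypothetical module path, which is essentially what the perlcritic command does for you.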

      If you are using Test::Perl::Critic, please be sure to make the tests only run if some environment variable such as RELEASE_TESTING is set. There are a couple of reasons for this.

      First, Perl::Critic takes time, and what it tests is not likely to actually change from the time you test your release to the time it gets on a user's system. So there's no good reason to tie up user install time testing what cannot have changed since you built the distribution.

      Second, it is possible that others have a global Perl::Critic config file that alters what Perl::Critic looks for. You could discover your tests are suddenly failing on those users' systems, not because the code has changed, but because the test's behavior has changed. Conversely, if you have your own .perlcriticrc, and if it doesn't ship with the distribution, then what you are testing will again be different from what the tests do on a typical user's system.

      For these reasons it's wise to not cause a test suite failure based on Test::Perl::Critic running on users' systems. The best approach is to only run it when you are preparing a release.


      Dave

        This is good advice. It's why I run it on the side outside of the test suite.

Re: Improving the quality of my modules
by eyepopslikeamosquito (Chancellor) on Aug 15, 2015 at 03:49 UTC
Re: Improving the quality of my modules
by BrowserUk (Pope) on Aug 15, 2015 at 02:16 UTC

    Sorry, but I have to ask. From what I know of the modules you've listed, none of them will play any part in the runtime function, utility or effectiveness of your modules.

    So how do you perceive that they will "improve the quality of your modules"?

    All of the modules you've listed may -- and I heavily qualify that 'may' -- have some benefit for you as the maintainer of your code; but none of them -- with the possible exception of Devel::NYTProf -- will benefit the users of your code.

    As someone who is fairly intimately familiar with the details and quality of your code, and your work ethic, I really wonder whether your adoption of these tools will simply divert your attention from producing code that solves many people's problems to code that serves only to satisfy the arbitrary and capricious 'rules' of dumb (as in unthinking, inflexible) robotic tests; dogmatic compliance rather than user-felt improvements to the actual, runtime code.


      Thanks for the compliment. I certainly remember the conversation we had before. One of my most useful Perl Monks interactions!

      I think that some of the tools on this list can distract from work (especially if you took an 'I am going to use every single one of them' approach). For example, I don't personally find that Perl::Critic is very helpful to me (and I mostly agree that following it would be to satisfy someone's arbitrary rules, many of which I do not personally agree with). And the 'Change Kwalitee' tool doesn't (IMO) contribute much at this point (though I could see how it could evolve into something a bit more useful for tracking changes), so I threw a couple of my smaller modules at it for fun, but my main modules don't use it.

      However, a number of the tools DO make some contribution to the quality of code and require very little effort to use. Travis CI is a good example. Using it, I automatically run the test suite on all versions of Perl from 5.6 to 5.20 without having to do it manually. True, it hasn't actually caught anything for me yet... but someday it may. And given how easy it is to use, I consider it a useful tool. As a side note, using Travis CI forced me to put my modules on GitHub, and I've gotten 5 or 6 patches as a result. Nothing major to date, but all of them have been valid. Likewise, Test::Pod and Test::Pod::Coverage are painless to set up, and that one-time cost makes sure that my Pod files are all valid and complete (and I've caught both types of problems using them prior to a new release). Pod::Spell is similarly easy to use, and I run it just prior to a release to spell check my Pod. These types of tools are so trivial to use that I don't see any reason NOT to use them.

      Obviously, Devel::NYTProf is extremely valuable, but I'm finding Devel::Cover almost equally so. I've just barely started using it, and by finding places in my code that aren't tested in my test suite, I've already found 1 or 2 very minor bugs. It may take a while, but eventually I would like to see every single line of my module get attention in a test suite. I think that would be a necessary prerequisite to having completely bug-free code.