http://www.perlmonks.org?node_id=1227454

Good morning nuns and monks,

Nothing to read during the next holidays? ;=)

I wrote this tutorial and I'd really appreciate your comments and corrections.

The following tutorial is in part the fruit of a learn-by-teaching process, so before pointing newbies to my work I need some confirmation.

The tutorial is a step-by-step journey into perl module development with tests, documentation and git integration. It seemed to me the bare minimum approach in late 2018.

Since this is a long post (over the 64KB perlmonks constraint), the second part is in a reply to this node; because of this, the table of contents links for the second part are broken (I'll fix them sooner or later).

This material is already in its github repository, with a long name (I hope that makes it easier to find). The code generated in this tutorial also has its own archived repository.

I'll gladly accept comments here or as pull requests (see contributing), as you wish, about the following:

The module code presented is.. fairly simplistic and I do not plan to change it: the tutorial is about everything but coding. Tests, documentation, distribution and revision control are the points of this guide, and I tried to keep everything as small as possible. If you really cannot resist rewriting the code of the module, rewrite it all and I can add a TIMTOWTDI section, just for amusement.

On the other hand, day eight: other module techniques has room for improvements and additions: if you want to share your own techniques about testing, makefile hacking or automating distribution, I think this is the place. I chose module-starter to sketch out the module as it seemed to me simple and complete, but it has some quirks. Examples of other tools could be worth another day of the tutorial, but keep it simple.

When you have commented on this tutorial I'll remove the [RFC] from the title and I'll point newcomers to this guide (or would it be better reposted in another section?), if you judge it worth reading.

Thanks!

L*

UPDATE 20 Dec. Added a readmore tag around the below content. The online repository is receiving some pull requests ;) so I added a version number to the doc. Tux is very busy but pointed me to Release::Checklist and I'll add it to the tutorial.


Discipulus's step by step tutorial on module creation with tests and git

day zero: introduction

day one: prepare the ground

day two: some change and tests

day three: finally some code

day four: the PODist and the coder

day five: deeper tests

day six: testing STDERR

day seven: the module is done but not ready

day eight: other module techniques

bibliography

acknowledgements

day zero: introduction

foreword

This tutorial is not about coding: that's it! The code, idea and implementation presented below are, by choice, futile, piffling and trifling (the module resulting from this tutorial is available, archived in its own repository).

This tutorial, on the other hand, tries to show the beginner one possible path in module creation. As always in perl there are many ways to get the job done and mine is far from being the optimal one, but since I encountered many difficulties choosing my own path, perhaps sharing my way can help someone else.

There are other similar but different sources of knowledge about module creation, notably José's Guide for creating Perl modules: read it for some points I do not explore (well, read it anyway: it's worth it).

the bag of tools

As with every job, check your equipment before starting. You probably already have perl, a shell (or something less fortunate if you are on windows, like me ;) and a favourite text editor or IDE.

But here in this tutorial we'll use git in the command line and github to store our work in a central point (very handy feature). So get a github account and a git client.

This tutorial will focus on the importance (I'd say preeminence or even predominance) of testing while developing a perl module. I wrote lone scripts for years, then I realized that even if my scripts seemed robust, I had no way to test them in a simple and reliable way.

So we will use the core module Test::More and the CPAN module Test::Exception in our module, so get them installed using your cpan or cpanm client. Take a look at Test::Simple if you are not used to testing.

We also use the core module Carp to report errors from the user's point of view.

We use Module::Starter to have the skeleton of our module generated for us, but, as always, there are valid alternatives. Install it.

We'll document our module using POD (Plain Old Documentation); see perlpod for reference.

the plan

Some of your programs or modules work on lists and arrays. Functions inside these programs accept ranges, but while you know what a valid range is ( 0,1,2 or 0..2 ), you discover that your programs crashed many times because other humans or other programs passed ranges like: 0,1..3,2 (where 2 is present twice), or 3,2,1 (while your application silently expects 1,2,3 ), or 9..1, or even 0,1,good,13..15 which is not a range at all, or simply 1-3 which is a range for the user but not for your perl code, which reads it as -2.
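To make the problem concrete, here is a tiny throwaway sketch (not part of the module we are going to write) showing what perl actually sees for the inputs described above:

use strict;
use warnings;

# a user may think '1-3' means the range 1,2,3, but evaluated as perl
# code it is plain subtraction
my @from_user  = eval '1-3';        # (-2)        : 1 minus 3, not a range
my @real_range = ( 1 .. 3 );        # (1,2,3)     : what we actually wanted
my @overlap    = ( 0, 1 .. 3, 2 );  # (0,1,2,3,2) : 2 appears twice

print "@from_user | @real_range | @overlap\n";   # -2 | 1 2 3 | 0 1 2 3 2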

Tired of this situation, you plan a new module to validate ranges. Range::Validator is the name you choose. Your initial plan is to expose just one sub: validate

As in masonry, you need a well prepared plan before starting excavations. Then you need points and lines drawn on the terrain: everything that makes the job complex is part of the job itself.

Look around: you can bet someone else had your same idea before you. You can also bet he or she was smarter than you and already uploaded it to CPAN.

Sharing early is a good principle: if you already have an idea for your module (even before implementing it), it can be worth asking in a forum dedicated to Perl (like perlmonks.org), posting an RFC (Request For Comments) or using the dedicated website prepan.org (not a crowded place nowadays.. ;).

Plan it well: it is difficult, but remember that repairing something badly planned is always a worse task. The basic reading is in the core documentation: perlnewmod is the place to start and perlmodstyle is what comes next. Don't miss the basic documentation.

If you want to read more see, in my bibliotheca, the scaffold dedicated to modules.

Choose all your names carefully: the module's name and the names of the methods or functions your module exports. Good code with badly named methods is often unusable by anyone but the author.

Programming is a matter of interfaces. Sic. Full stop. Coding is easy, engineering is hard. Sic. Another full stop. You can change the implementation a million times; you can never change how other people use your code. So plan well what you offer with your module. You can add new features in the future; you cannot remove even one of them, because someone is already using it in production. Play nice: plan well.

You can profit from reading a wonderful post: On Interfaces and APIs

day one: prepare the ground

step 1) an online repository on github

Create an empty repository on the github server named Range-Validator (they do not accept :: in names); see here for instructions.

step 2) a new module with module-starter

Open a shell at your scripts location and run the module-starter program that comes with Module::Starter. It wants an email address, the author name and, obviously, the module name:

shell> module-starter --module Range::Validator --author MyName --email MyName@cpan.org
Added to MANIFEST: Changes
Added to MANIFEST: ignore.txt
Added to MANIFEST: lib/Range/Validator.pm
Added to MANIFEST: Makefile.PL
Added to MANIFEST: MANIFEST
Added to MANIFEST: README
Added to MANIFEST: t/00-load.t
Added to MANIFEST: t/manifest.t
Added to MANIFEST: t/pod-coverage.t
Added to MANIFEST: t/pod.t
Added to MANIFEST: xt/boilerplate.t
Created starter directories and files

A lot of work done for us! The module-starter program created all the above files in a new folder named Range-Validator; let's see the content:

---Range-Validator
    |   Changes
    |   ignore.txt
    |   Makefile.PL
    |   MANIFEST
    |   README
    |
    |---lib
    |   |---Range
    |           Validator.pm
    |
    |---t
    |       00-load.t
    |       manifest.t
    |       pod-coverage.t
    |       pod.t
    |
    |---xt
            boilerplate.t

We now have a good starting point to work on. Spend some minutes reviewing the content of the files to get an idea.

step 3) a local repository with git

Open another shell for the git client (I prefer to have two; feel free to use just one) at the path of the folder created above, and initialize a git repository (local for the moment) there:

git-client> git init
Initialized empty Git repository in /path/to/Range-Validator/.git/

Nothing impressive.. What happened? The above command created a .git directory, ~15KB of data, to keep track of all changes you'll make to your files inside the Range-Validator folder. In other words it created a git repository. Empty. Empty?!? And all my files?

It's time for a command you'll use many, many times: git status

git-client> git status
On branch master

No commits yet

Untracked files:
  (use "git add <file>..." to include in what will be committed)

        Changes
        MANIFEST
        Makefile.PL
        README
        ignore.txt
        lib/
        t/
        xt/

nothing added to commit but untracked files present (use "git add" to track)

Many terms in the above output would be worth explaining, but not by me. Just be sure to understand what branch, commit and tracked/untracked mean in the git world. Luckily the command is sweet enough to add a hint for us in the last line: (use "git add" to track)

Git is built for this reason: it can track all modifications we make to the code base, and it takes a picture (a snapshot in git terminology) of the whole code base every time we commit these changes. But git init initialized an empty repository: we must tell git which files to add to the tracked ones.

We simply want to track all files module-starter created for us: git add . adds the current directory and all its content to the tracked content. Run it and check the status again:

git-client> git add .
git-client> git status
On branch master

No commits yet

Changes to be committed:
  (use "git rm --cached <file>..." to unstage)

        new file:   Changes
        new file:   MANIFEST
        new file:   Makefile.PL
        new file:   README
        new file:   ignore.txt
        new file:   lib/Range/Validator.pm
        new file:   t/00-load.t
        new file:   t/manifest.t
        new file:   t/pod-coverage.t
        new file:   t/pod.t
        new file:   xt/boilerplate.t

We added all content but we still have not committed anything! git commit -m "some text" will commit all changes using the message provided as a label for the commit (without -m git will open a text editor to enter the text). Run it, then check the status again:

git-client> git commit -m "module-starter created content"
[master (root-commit) 1788c12] module-starter created content
 11 files changed, 409 insertions(+)
 create mode 100644 Changes
 create mode 100644 MANIFEST
 create mode 100644 Makefile.PL
 create mode 100644 README
 create mode 100644 ignore.txt
 create mode 100644 lib/Range/Validator.pm
 create mode 100644 t/00-load.t
 create mode 100644 t/manifest.t
 create mode 100644 t/pod-coverage.t
 create mode 100644 t/pod.t
 create mode 100644 xt/boilerplate.t

git-client> git status
On branch master
nothing to commit, working tree clean

With the above we committed everything. The status is now working tree clean: what better news for a lumberjack used to examining tons of dirty logs every day? ;)

Now we link the local copy and the remote one on github: all examples you find, and even what github proposes to you, say git remote add origin https://github.com/... where origin is not a keyword but just a label, a name. I found this misleading, so I use my github name in its place, or something that tells me the meaning, like MyFriendRepo. So from now on we will use YourGithubLogin there.

Add the remote and verify it ( with -v ):

git-client> git remote add YourGithubLogin https://github.com/YourGithubLogin/Range-Validator.git
git-client> git remote -v
YourGithubLogin   https://github.com/YourGithubLogin/Range-Validator.git (fetch)
YourGithubLogin   https://github.com/YourGithubLogin/Range-Validator.git (push)

The verify operation gives us two hints: for the remote repository that we call YourGithubLogin we can fetch (import all changes you do not have yet, from the remote repository to your local copy) or push (export your local copy to the remote repository).

Since there is nothing on github and locally we have the whole code base, we definitely want to push, and we can do that if and only if we have permission on the remote repository. It's our own repository, so no problem (git will ask for the github password). The push wants to know which branch to push: we only have master, so:

git-client> git push YourGithubLogin master
fatal: HttpRequestException encountered.
Username for 'https://github.com': YourGithubLogin
Password for 'https://YourGithubLogin@github.com': ***********
Counting objects: 17, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (14/14), done.
Writing objects: 100% (17/17), 5.33 KiB | 303.00 KiB/s, done.
Total 17 (delta 1), reused 0 (delta 0)
remote: Resolving deltas: 100% (1/1), done.
remote:
remote: Create a pull request for 'master' on GitHub by visiting:
remote:      https://github.com/YourGithubLogin/Range-Validator/pull/new/master
remote:
To https://github.com/YourGithubLogin/Range-Validator.git
 * [new branch]      master -> master

Go to the github website to see what happened: the whole code base is in the online repository too, updated to our last commit (aka our first, and only, commit for the moment). From now on we can work on our code from any machine with a git client. To do so we must be diligent, committing and pushing our changes at the right moment, to keep the online repository up to date. Clean yard, happy master mason.

A whole day has passed, well.. two days, and we have not written a single line of perl code: we are starting the right way! Time to go to sleep with a well prepared playground.

day two: some change and tests

step 1) POD documentation

Well, first of all some cleaning: open your local copy of the module /path/to/Range-Validator/lib/Range/Validator.pm in your text editor or IDE. Personally I like the POD documentation to be all together after the __DATA__ token rather than interleaved with the code. Inside the code I only like to have comments. POD documentation is for the user, comments are for you! After a week or a month you'll never remember what your code is doing: comment it, explaining what is going on.

So go to the end of the module, where the final 1; line is (remember all modules must have a true value as their last statement), and place the __DATA__ token on a new line. Move all the POD after the token. Also delete the POD and the code related to function2.

Then rename function1 to validate and change the name of the POD section accordingly.

Modify the =head1 NAME POD part with a more humble and meaningful description: Range::Validator - a simple module to verify array and list ranges

Change the =head1 SYNOPSIS part too, removing unneeded text and changing code lines (see below): we are not writing an object-oriented module, so no new method for us. You plan to accept both real ranges and strings representing ranges.

So, if you followed along, the module should look like:

package Range::Validator;

use 5.006;
use strict;
use warnings;

our $VERSION = '0.01';

sub validate {

}

1;

__DATA__

=head1 NAME

Range::Validator - a simple module to verify array and list ranges

=head1 VERSION

Version 0.01

=cut

=head1 SYNOPSIS

    use Range::Validator;

    my @range = Range::Validator->validate(0..3);       # a valid range
    my @range = Range::Validator->validate(0..3,2);     # an overlapping range
    my @range = Range::Validator->validate('1,3,7');    # a valid range passed as a string
    my @range = Range::Validator->validate('1,XXX,3');  # an invalid range passed as a string

    # more POD ...

Ok? Let's check our new POD is correct: open the shell in the directory created yesterday, /path/to/Range-Validator, and run the following command:  perldoc ./lib/Range/Validator.pm

Review the POD. It should be OK.

step 2) first test

Now we test whether the module syntax is correct. The first simple method is a short one-liner using the perl option -I to include ./lib in @INC and -MRange::Validator to use our module (see perlrun and perlvar):

shell> perl -I ./lib -MRange::Validator -e 1
shell>

No errors: good! The module can be used and has no syntax errors. But.. one moment: do we want to try out all our features, and we plan to add many, using one-liners? Are we mad?! No; we will use tests.

Tests are wonderful in perl, and planning good tests (a test suite) will save a lot of time in the future and make your code maintainable. The time you invest writing tests while coding will save a lot of time later when you modify the code base. I'm not a theorist of software writing nor a zealot of test-driven development, but writing tests while you code is a very good practice. You can even write tests before coding, i.e. you write something that tests a wanted behaviour, you run it expecting a failure, then you write the code that makes the test happy. This is up to you.

What is not a choice is having no test suite or writing all tests at the end of code development. No.

On day one we used module-starter to produce a skeleton of our module. module-starter was kind enough to write a bunch of tests for us in the standard directory /t (i.e. tests). Tests are normally run during the installation of the module (sorted by their names) but, as we already said, they are the main source of serenity for us as developers. So let's see what module-starter wrote inside /t/00-load.t

#!perl -T
use 5.006;
use strict;
use warnings;
use Test::More;

plan tests => 1;

BEGIN {
    use_ok( 'Range::Validator' ) || print "Bail out!\n";
}

diag( "Testing Range::Validator $Range::Validator::VERSION, Perl $], $^X" );

This perl program uses strict and warnings (you already know they are friends, don't you?), then loads the core module Test::More, which generally requires that you declare how many tests you intend to run ( plan tests => 1 ). Then, inside the BEGIN block, it uses the use_ok function to load our own module and, in case of failure, prints "Bail out!\n", aka "everything went wrong, abandon ship".

After that, the test calls diag, which emits a diagnostic message with the text specified, useful to have while reviewing test output. The module also has the note function, which I prefer. Go to the module documentation to get an idea of what Test::More offers.
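If you want to see the difference between diag and note for yourself, a tiny throwaway test like the following (a hypothetical file t/99-notes.t, not part of our module) shows it: diag writes to STDERR and is shown by the harness even without -v, while note writes to STDOUT and is normally hidden unless you run prove with -v.

#!perl
use strict;
use warnings;
use Test::More tests => 1;

diag("I go to STDERR: prove shows me even without -v");
note("I go to STDOUT: prove hides me unless you pass -v");
ok( 1, 'a dummy test so the plan is satisfied' );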

So, instead of the one-liner we can safely run this test:

shell> perl -I ./lib ./t/00-load.t
"-T" is on the #! line, it must also be used on the command line at ./t/00-load.t line 1.

The test crashes because of the -T switch that turns taint mode on. Taint mode is a basic security feature in perl, but for the moment we do not need it enabled, so we remove it from the shebang line, which will then read #!perl (read about taint mode in the official perl documentation: perlsec).

(Note that removing the -T switch is not the best thing to do: perl -T -I ./lib ./t/00-load.t is by far a better solution).

After this change the test will run as expected:

shell> perl -I ./lib ./t/00-load.t
ok 1 - use Range::Validator;
1..1
# Testing Range::Validator 0.01, Perl 5.026000, /path/to/my/perl

Wow! We ran our first test! ..yes, but in the wrong way. Well, not exactly the wrong way, but not the way tests are run during installation. Tests are run through a TAP harness (TAP stands for Test Anything Protocol and has been present in perl forever: perl was born the right way ;).

With your perl distribution you have the prove command (see its documentation), which runs tests through a TAP harness. So we can use it.

We can call prove the very same way we called perl: prove -I ./lib ./t/00-load.t, but we are lazy and we spot prove -l, which has the same effect as prove -I ./lib, i.e. it includes ./lib in @INC.

Run the very same test through prove instead of perl and you will see slightly different output:

shell> prove -l ./t/00-load.t
./t/00-load.t .. 1/? # Testing Range::Validator 0.01, Perl 5.026000, /path/to/my/perl
./t/00-load.t .. ok
All tests successful.
Files=1, Tests=1,  0 wallclock secs ( 0.01 usr +  0.02 sys =  0.03 CPU)
Result: PASS

Basically the output includes some statistics, the count of test files processed and the overall number of tests. Also note that the message emitted by diag is in another place: diagnostics from Test::More go to STDERR (which is buffered differently with respect to STDOUT, but this is another story..) while TAP aggregates test results and prints them to STDOUT.

Finally we have the developer's gratification: Result: PASS, indicating all went well.

The prove program promotes laziness: without arguments (such as the test file in the previous example) it automatically runs every test file found under the /t folder. This is the same behaviour you will get during an actual module installation:

shell> prove -l
t\00-load.t ....... 1/? # Testing Range::Validator 0.01, Perl 5.026000, /path/to/my/perl
t\00-load.t ....... ok
t\manifest.t ...... skipped: Author tests not required for installation
t\pod-coverage.t .. skipped: Author tests not required for installation
t\pod.t ........... skipped: Author tests not required for installation
All tests successful.
Files=4, Tests=1,  1 wallclock secs ( 0.06 usr +  0.02 sys =  0.08 CPU)
Result: PASS

step 3) commit changes with git

Ok, we have made some changes to the code base, small ones, but changes. Which changes? I'm lazy and I do not remember all the files we modified. No problem, git will tell us. At least I remember which command I need to review the code base status: git status

Go to the git shell and run it:

git-client> git status
On branch master
Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git checkout -- <file>..." to discard changes in working directory)

        modified:   lib/Range/Validator.pm
        modified:   t/00-load.t

no changes added to commit (use "git add" and/or "git commit -a")

Ah yes, we modified two files: not only the module but also t/00-load.t, removing the -T from the shebang line. Thanks git, and you are also kind enough to give me two hints about what to do next: use "git add" and/or "git commit -a"

Go for the shorter path: we commit, adding all files, with git commit -a, i.e. we commit all files that are already tracked and we also remove from the tracked list any files deleted from the code base. But remember that a commit needs to include a message as its label: git commit -m "message". So, putting it all together and checking the status:

git-client> git commit -a -m "moved POD, removed -T"
[master 49a0690] moved POD, removed -T
 2 files changed, 20 insertions(+), 23 deletions(-)

git-client> git status
On branch master
nothing to commit, working tree clean

step 4) pushing to github repository

Ok, we submitted, well, committed, all the changes made. What's next? We have to synchronize the online repository that we named YourGithubLogin, so check it and push the modified content to it:

git-client> git remote -v
YourGithubLogin   https://github.com/YourGithubLogin/Range-Validator (fetch)
YourGithubLogin   https://github.com/YourGithubLogin/Range-Validator (push)

git-client> git push YourGithubLogin master
fatal: HttpRequestException encountered.
Username for 'https://github.com': YourGithubLogin
Password for 'https://YourGithubLogin@github.com':
Counting objects: 7, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (5/5), done.
Writing objects: 100% (7/7), 870 bytes | 435.00 KiB/s, done.
Total 7 (delta 3), reused 0 (delta 0)
remote: Resolving deltas: 100% (3/3), completed with 3 local objects.
To https://github.com/YourGithubLogin/Range-Validator
   1788c12..49a0690  master -> master

Go to the browser and open the online repository to see what happened after the git push: on the main page, where files are listed, we spot our two modified files with a new timestamp and with the message we used when committing. Under the Insights tab and then under Network in the menu, we can see two points connected by a line segment: this is the visual history of our repository and each commit we have made. Here you will also find any branches, but this is another story.

Well, another day has passed without writing a single line of perl code! At least for the moment our code is 100% bug free ;) I vaguely recall a Chinese motto: "when you start something, start from the opposite" or something like that. To write a robust perl module, start by writing no perl code, for two days!

day three: finally some code

step 1) first lines of code

It's time to put some code inside our validate subroutine. We plan to accept both a string like '1..3,5' and a real range like 1..5,6, but let's start with the string form, assuming only one element will be passed to our sub via @_.

Remember what was said in the foreword: this tutorial is not about coding, so be merciful with the following examples.

sub validate {
    my $range;
    my @range;
    # assume we have a string if we receive only one argument
    if ( @_ == 1 ){
        $range = $_[0];
    }
    # otherwise we received a list
    else {
        ...
    }
    return @range;
}

The above is straightforward (if ugly): we get something in via @_ (a string or a list) and we return something via return @range. To accomplish this we declare $range to hold our string.

A good principle in loops is "put exit conditions early", and following this principle we put our die conditions as soon as possible, i.e. right after the if/else check.

But we don't want to die with an ugly message like Died at ../Range/Validator.pm line x, i.e. from the module's perspective: we want to inform the user where their code provoked our module to die.

The core module Carp provides this kind of behaviour, and we use its function croak, which dies from the perspective of the caller.
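If the difference between die and croak is not clear, here is a minimal sketch (a hypothetical Demo package, unrelated to our module) showing how the two report the error location:

# file Demo.pm (hypothetical, just for illustration)
package Demo;
use strict;
use warnings;
use Carp;

sub with_die   { die   "bad argument" }   # error reported at Demo.pm's line number
sub with_croak { croak "bad argument" }   # error reported at the caller's line number

1;

# in the user's script:
#   Demo::with_die();     # "bad argument at Demo.pm line ..."   (module's perspective)
#   Demo::with_croak();   # "bad argument at user_script.pl line ..." (caller's perspective)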

So we add the line needed to load the module, a first croak call if the string passed in contains forbidden characters, and a few other lines too:

package Range::Validator;

use 5.006;
use strict;
use warnings;
use Carp;                                              # --- new line

our $VERSION = '0.01';

sub validate {
    my $range;
    my @range;
    # assume we have a string if we receive only one argument
    if ( @_ == 1 ){
        $range = $_[0];
    }
    # otherwise we received a list
    else {
        ...
    }
    # remove any space from string
    $range =~ s/\s+//g;                                # --- new line
    # die if invalid characters
    croak "invalid character passed in string [$range]!"
        if $range =~ /[^\s,.\d]/;                      # --- new line
    @range = eval ($range);                            # --- new line
    return @range;
}

1;

step 2) testing on our own

How do we see whether it all works as expected? Obviously with a test. Not 00-load.t, but a new one dedicated to the validate sub. So go into the t folder, create a new file 01-validate.t and open it to edit the content.

Let's populate it with some basic content plus some new stuff (01-validate.t):

#!perl
use 5.006;
use strict;
use warnings;
use Test::More qw(no_plan);
use Test::Exception;

use_ok( 'Range::Validator' );

ok (scalar Range::Validator::validate('0..2') == 3,
    'ok valid string produces correct number of elements' );

note ("starting test of forbidden characters in the string form");

dies_ok { Range::Validator::validate('xxxinvalidstringxxx') }
    "expected to die with invalid character";

First of all we used a different notation for Test::More, i.e. use Test::More qw(no_plan)

We are telling the module that we do not (yet) have a plan for how many tests will be in this file. This is a handy feature.
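As a side note, a common alternative to qw(no_plan) is to call done_testing() at the end of the file once all tests have run; which one you prefer is a judgment call, and this tutorial sticks with no_plan. A minimal sketch:

#!perl
use strict;
use warnings;
use Test::More;    # no plan declared up front

ok( 1 + 1 == 2,      'arithmetic still works' );
ok( 'perl' =~ /per/, 'regex still works' );

done_testing();    # tells Test::More how many tests actually ran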

The Test::More core module offers us the ok, use_ok and note functions: see the module docs for more info about them.

But in the above test we also used the dies_ok function: this one comes from the CPAN module Test::Exception, and we need to add this module to our dependency list.

Dependency list? What is that? Where did we talk about this? Never, until now.

step 3) add dependencies in Makefile.PL

In fact the module-starter program used on day one created a file called Makefile.PL with the following default content:

use 5.006;
use strict;
use warnings;
use ExtUtils::MakeMaker;

WriteMakefile(
    NAME             => 'Range::Validator',
    AUTHOR           => q{MyName <MyName@cpan.org>},
    VERSION_FROM     => 'lib/Range/Validator.pm',
    ABSTRACT_FROM    => 'lib/Range/Validator.pm',
    LICENSE          => 'artistic_2',
    PL_FILES         => {},
    MIN_PERL_VERSION => '5.006',
    CONFIGURE_REQUIRES => {
        'ExtUtils::MakeMaker' => '0',
    },
    BUILD_REQUIRES => {
        'Test::More' => '0',
    },
    PREREQ_PM => {
        #'ABC'              => '1.6',
        #'Foo::Bar::Module' => '5.0401',
    },
    dist  => { COMPRESS => 'gzip -9f', SUFFIX => 'gz', },
    clean => { FILES => 'Range-Validator-*' },
);

This file is run on the target system trying to install your module. It's a vast subject and you can find much useful information in the core documentation of ExtUtils::MakeMaker and in ExtUtils::MakeMaker::Tutorial and, as always in perl, there are many ways to do it.

In our simple case we only need to know a few facts about the BUILD_REQUIRES and PREREQ_PM fields.

The first one lists, in a hash, all the modules and their versions needed to build our module. Building includes testing, so if you need some module during tests this is the place to declare the dependency. The module-starter program added the 'Test::More' => '0' entry for us. This is the right place to state that we intend to use the Test::Exception CPAN module during tests.

On the other hand, PREREQ_PM lists the modules and their minimal versions needed to run your module. As you can see it's a different thing: to run Range::Validator you never need Test::Exception but, for example, you'll need Carp.

Even though Carp is a core module, it is good practice to include it in PREREQ_PM.

Read a very good post about dependencies Re: How to specify tests dependencies with Makefile.PL?

Cleaning up the example lines and given all the above, we will modify Makefile.PL as follows:

use 5.006;
use strict;
use warnings;
use ExtUtils::MakeMaker;

WriteMakefile(
    NAME             => 'Range::Validator',
    AUTHOR           => q{MyName <MyName@cpan.org>},
    VERSION_FROM     => 'lib/Range/Validator.pm',
    ABSTRACT_FROM    => 'lib/Range/Validator.pm',
    LICENSE          => 'artistic_2',
    PL_FILES         => {},
    MIN_PERL_VERSION => '5.006',
    CONFIGURE_REQUIRES => {
        'ExtUtils::MakeMaker' => '0',
    },
    BUILD_REQUIRES => {
        'Test::More'      => '0',
        'Test::Exception' => '0',    # --- new line
    },
    PREREQ_PM => {
        'Carp' => '0',               # --- new line
    },
    dist  => { COMPRESS => 'gzip -9f', SUFFIX => 'gz', },
    clean => { FILES => 'Range-Validator-*' },
);

So the moral is: when you add a dependency needed to run your module or to test it, remember to update the corresponding part of Makefile.PL.

step 4) run the new test

Ok, is the above test fine? Does it return all we expect? Try it using prove -l, but also specifying -v to be verbose and the filename of our new test (right now we don't want all tests run, just the one we are working on):

shell> prove -l -v ./t/01-validate.t
./t/01-validate.t ..
ok 1 - use Range::Validator;
ok 2 - ok valid string produces correct number of elements
# starting test of forbidden characters in the string form
ok 3 - expected to die with invalid character
1..3
ok
All tests successful.
Files=1, Tests=3,  0 wallclock secs ( 0.05 usr +  0.01 sys =  0.06 CPU)
Result: PASS

step 5) commit, add new files and push with git

What more do we need from our first day of coding? To check our status and to synchronize our online repository (pay attention to the following commands because we have a new, untracked file!):

git-client> git status
On branch master
Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git checkout -- <file>..." to discard changes in working directory)

        modified:   Makefile.PL
        modified:   lib/Range/Validator.pm

Untracked files:
  (use "git add <file>..." to include in what will be committed)

        t/01-validate.t

no changes added to commit (use "git add" and/or "git commit -a")

git-client> git commit -a -m "some code into validate and modified Makefile.PL"
[master 580f628] some code into validate and modified Makefile.PL
 2 files changed, 23 insertions(+), 3 deletions(-)

We committed before adding the new file! Shame on us! Add the new file and issue another commit:

git-client> git add t/01-validate.t
git-client> git status
On branch master
Changes to be committed:
  (use "git reset HEAD <file>..." to unstage)

        new file:   t/01-validate.t

git-client> git commit -a -m "added 01-validate.t"
[master 5083ec3] added 01-validate.t
 1 file changed, 16 insertions(+)
 create mode 100644 t/01-validate.t

git-client> git status
On branch master
nothing to commit, working tree clean

What more? Ah! Pushing to the online repository:

git-client> git remote -v
YourGithubLogin   https://github.com/YourGithubLogin/Range-Validator (fetch)
YourGithubLogin   https://github.com/YourGithubLogin/Range-Validator (push)

git-client> git push YourGithubLogin master
fatal: HttpRequestException encountered.
Username for 'https://github.com': YourGithubLogin
Password for 'https://YourGithubLogin@github.com': *********
Counting objects: 10, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (8/8), done.
Writing objects: 100% (10/10), 1.31 KiB | 447.00 KiB/s, done.
Total 10 (delta 5), reused 0 (delta 0)
remote: Resolving deltas: 100% (5/5), completed with 4 local objects.
To https://github.com/YourGithubLogin/Range-Validator
   49a0690..5083ec3  master -> master
What a day! We added six lines of code and an entire test file! Are we programming too much? Probably not, but we are doing it in a robust way and we discovered it can be hard work. In perl hard work is justified only by (future) laziness, and we are doing all this work because we are lazy and we do not want to waste our time when, in a month or a year, we need to pick up this code base again to enhance it or to debug it. So now it's time for bed and for well-deserved colorful dreams.

continue..

[RFC] Discipulus's step by step tutorial on module creation with tests and git -- second part
by Discipulus (Canon) on Dec 19, 2018 at 11:16 UTC

    day four: the PODist and the coder

    step 1) the educated documentation

    We get up in the morning and suddenly realize that yesterday we forgot something very important: documentation! Good documentation is like an educated person, while poor documentation is like a boorish one: whom do you prefer to meet?

    The same goes for your module users: they hope and expect to find good documentation, and writing it is our duty. Full stop.

    Documentation content, in my little experience, can be impacted a lot by even small changes in the code or interface, so generally I write the larger part of the docs when the implementation or interface is well shaped. But, on the other hand, a good approach is to put in the docs every little statement that will hold true from the very beginning of your module development. At the moment we can state that our validate sub accepts both strings and ranges and always returns an array.

    At the moment the relevant part of the POD documentation is:

    =head1 EXPORT

    A list of functions that can be exported.  You can delete this section
    if you don't export anything, such as for a purely object-oriented module.

    =head1 SUBROUTINES/METHODS

    =head2 validate

    =cut

    We do not plan to export functions: our sub must be called via its fully qualified name, as we do in the test we created, Range::Validator::validate(), so we can delete the EXPORT part and add something to the subroutines part:

    =head1 SUBROUTINES

    =head2 validate

    This function accepts a string or a list (range) and returns an array.

    In the string form the accepted characters are: positive integers, dots,
    commas and spaces. Every space will be removed.

    We do not need =cut anymore because we no longer have POD blocks interleaved with the code, but a single POD block after the __DATA__ token.

    step 2) git status again and commit again

    Since we are now very fast with git commands, let's commit this little change; the push to the remote repository can be left for the end of the work session. So status (check it frequently!) and commit:

    git-client> git status
    On branch master
    Changes not staged for commit:
      (use "git add <file>..." to update what will be committed)
      (use "git checkout -- <file>..." to discard changes in working directory)

            modified:   lib/Range/Validator.pm

    no changes added to commit (use "git add" and/or "git commit -a")

    git-client> git commit -a -m "initial POD to document validate function"
    [master a6dc557] initial POD to document validate function
     1 file changed, 5 insertions(+), 6 deletions(-)

    step 3) more code...

    Now it's time to add more checks on the incoming string: we do not accept a lone dot between non-dots, nor more than two consecutive dots:

        # not allowed a lone .
        croak "invalid range [$range] (single .)!" if $range =~ /[^.]+\.{1}[^.]+/;
        # not allowed more than 2 .
        croak "invalid range [$range] (more than 2 .)!" if $range =~ /[^.]+\.{3}/;

    The whole sub now looks like:

    sub validate {
        my $range;
        my @range;
        # assume we have a string if we receive only one argument
        if ( @_ == 1 ){
            $range = $_[0];
        }
        # otherwise we received a list
        else {
            ...
        }
        # remove any space from string
        $range =~ s/\s+//g;
        # die if invalid characters
        croak "invalid character passed in string [$range]!"
            if $range =~ /[^\s,.\d]/;
        # not allowed a lone .
        croak "invalid range [$range] (single .)!" if $range =~ /[^.]+\.{1}[^.]+/;
        # not allowed more than 2 .
        croak "invalid range [$range] (more than 2 .)!" if $range =~ /[^.]+\.{3}/;

        @range = eval ($range);
        return @range;
    }

    step 4) ...means more and more tests

    Now it is time to test this behaviour. Go edit ./t/01-validate.t, adding (append the following code to the end of the test file) some dies_ok statements preceded by a note:

    note ("start checks about incorrect dots in string"); dies_ok { Range::Validator::validate('1.2') } "expected to die with a lone dot"; dies_ok { Range::Validator::validate('0..2,5.6,8') } "expected to die with a lone dot";
    Run the test via prove

    shell> prove -l -v ./t/01-validate.t
    ./t/01-validate.t ..
    ok 1 - use Range::Validator;
    ok 2 - ok valid string produces correct number of elements
    # starting test of forbidden characters in the string form
    ok 3 - expected to die with invalid character
    # start checks about incorrect dots in string
    ok 4 - expected to die with a lone dot
    ok 5 - expected to die with a lone dot
    1..5
    ok
    All tests successful.
    Files=1, Tests=5,  0 wallclock secs ( 0.06 usr +  0.03 sys =  0.09 CPU)
    Result: PASS

    Fine! But.. too much repetition in the test code. Aren't we expected to be DRY (Don't Repeat Yourself)? Yes we are, and since we have been lazy enough to put use Test::More qw(no_plan), we can add a nice loop of tests (replace the last two dies_ok with the following code in the test file):

    foreach my $string ( '1.2', '0..2,5.6,8', '1,2,.,3', '.' ){
        dies_ok { Range::Validator::validate( $string ) }
            "expected to die with a lone dot in range [$string]";
    }
    Run the test again:

    shell> prove -l -v ./t/01-validate.t
    ./t/01-validate.t ..
    ok 1 - use Range::Validator;
    ok 2 - ok valid string produces correct number of elements
    #   Failed test 'expected to die with a lone dot in range [.]'
    # starting test of forbidden characters in the string form
    #   at ./t/01-validate.t line 22.
    ok 3 - expected to die with invalid character
    # Looks like you failed 1 test of 7.
    # start checks about incorrect dots in string
    ok 4 - expected to die with a lone dot in range [1.2]
    ok 5 - expected to die with a lone dot in range [0..2,5.6,8]
    ok 6 - expected to die with a lone dot in range [1,2,.,3]
    not ok 7 - expected to die with a lone dot in range [.]
    1..7
    Dubious, test returned 1 (wstat 256, 0x100)
    Failed 1/7 subtests

    Test Summary Report
    -------------------
    ./t/01-validate.t (Wstat: 256 Tests: 7 Failed: 1)
      Failed test:  7
      Non-zero exit status: 1
    Files=1, Tests=7,  0 wallclock secs ( 0.02 usr +  0.03 sys =  0.05 CPU)
    Result: FAIL

    FAIL? Fortunately we showed the current range passed in the text generated by the test, so go and examine it:

    not ok 7 - expected to die with a lone dot in range [.]

    Can you spot why it fails (i.e. it does not die as expected)? Our regex to check for a lone dot is:

    /[^.]+\.{1}[^.]+/

    And it reads (as per the YAPE::Regex::Explain output): any character except: . (1 or more times (matching the most amount possible)) followed by . (1 times) followed by any character except: . (1 or more times (matching the most amount possible))

    Which is simply not true for the given string '.'. So we try changing both plus signs to question mark quantifiers in the regex: it does not help. As a wise friend explains, we need lookarounds: /(?<!\.)\.(?!\.)/ will work! So we change the check in the module as follows:

        # not allowed a lone .
        croak "invalid range [$range] (single .)!" if $range =~ /(?<!\.)\.(?!\.)/;
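    If you want to convince yourself that the lookaround version behaves as wanted, you can compare the two regexes with a throwaway script (just an illustration, not part of the test suite):

    use strict;
    use warnings;

    # strings that contain a "lone" dot and should match...
    my @bad  = ( '1.2', '.', '1.', '.1' );
    # ...and strings whose dots only belong to a valid '..' operator
    my @good = ( '1..2', '0..2,5..8' );

    for my $s ( @bad, @good ) {
        my $old = $s =~ /[^.]+\.{1}[^.]+/  ? 'match' : 'no match';
        my $new = $s =~ /(?<!\.)\.(?!\.)/  ? 'match' : 'no match';
        printf "%-12s old: %-9s new: %s\n", "[$s]", $old, $new;
    }
    # the old regex misses '.', '1.' and '.1'; the lookaround catches them all,
    # while both correctly ignore the valid '..' strings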

    Since we spotted this edge case, we add two similar ones to the test:

    foreach my $string ( '1.2', '0..2,5.6,8', '1,2,.,3', '.', '1.', '.1' ){
        dies_ok { Range::Validator::validate( $string ) }
            "expected to die with a lone dot in range [$string]";
    }

    The next regex in the module (aimed at spotting three dots) is also not working, for the very same reason; change it from /[^.]+\.{3}/ to simply /\.{3}/

    The moral? Tests are your friends! We spotted, by chance, an edge case, and our code must be able to deal with it, so let your imagination run free when writing tests. Cockroaches come from box corners.. oops, no, I mean: bugs come from edge cases.

    Now we add some tests to check that the module dies if three dots are found, matched by the new simpler regex /\.{3}/, so we append the following code to the test file:

    foreach my $newstring ( '1...3', '1,3...5', '...', '1...', '...2' ){
        dies_ok { Range::Validator::validate( $newstring ) }
            "expected to die with three dots in range [$newstring]";
    }
    We run the test:

    shell> prove -l -v ./t/01-validate.t
    ./t/01-validate.t ..
    ok 1 - use Range::Validator;
    ok 2 - ok valid string produces correct number of elements
    # starting test of forbidden characters in the string form
    ok 3 - expected to die with invalid character
    # start checks about incorrect dots in string
    ok 4 - expected to die with a lone dot in range [1.2]
    ok 5 - expected to die with a lone dot in range [0..2,5.6,8]
    ok 6 - expected to die with a lone dot in range [1,2,.,3]
    ok 7 - expected to die with a lone dot in range [.]
    ok 8 - expected to die with a lone dot in range [1.]
    ok 9 - expected to die with a lone dot in range [.1]
    ok 10 - expected to die with three dots in range [1...3]
    ok 11 - expected to die with three dots in range [1,3...5]
    ok 12 - expected to die with three dots in range [...]
    ok 13 - expected to die with three dots in range [1...]
    ok 14 - expected to die with three dots in range [...2]
    1..14
    ok
    All tests successful.
    Files=1, Tests=14,  1 wallclock secs ( 0.03 usr +  0.03 sys =  0.06 CPU)
    Result: PASS
    So now your sub is:

    sub validate {
        my $range;
        my @range;
        # assume we have a string if we receive only one argument
        if ( @_ == 1 ){
            $range = $_[0];
        }
        # otherwise we received a list
        else {
            ...
        }
        # remove any space from string
        $range =~ s/\s+//g;
        # die if invalid characters
        croak "invalid character passed in string [$range]!"
            if $range =~ /[^\s,.\d]/;
        # not allowed a lone .
        croak "invalid range [$range] (single .)!" if $range =~ /(?<!\.)\.(?!\.)/;
        # not allowed more than 2 .
        croak "invalid range [$range] (more than 2 .)!" if $range =~ /\.{3}/;

        @range = eval ($range);
        return @range;
    }

    And our test file ./t/01-validate.t is as follows:

    #!perl
    use 5.006;
    use strict;
    use warnings;
    use Test::More qw(no_plan);
    use Test::Exception;

    use_ok( 'Range::Validator' );

    ok (scalar Range::Validator::validate('0..2') == 3,
        'ok valid string produces correct number of elements' );

    note ("starting test of forbidden characters in the string form");

    dies_ok { Range::Validator::validate('xxxinvalidstringxxx') }
        "expected to die with invalid character";

    note ("start checks about incorrect dots in string");

    foreach my $string ( '1.2', '0..2,5.6,8', '1,2,.,3', '.', '1.', '.1' ){
        dies_ok { Range::Validator::validate( $string ) }
            "expected to die with a lone dot in range [$string]";
    }

    foreach my $newstring ( '1...3', '1,3...5', '...', '1...', '...2' ){
        dies_ok { Range::Validator::validate( $newstring ) }
            "expected to die with three dots in range [$newstring]";
    }

    step 5) git: a push for two commits

    Time to review the status of the local repository, commit changes and push it online:

    git-client> git status
    On branch master
    Changes not staged for commit:
      (use "git add <file>..." to update what will be committed)
      (use "git checkout -- <file>..." to discard changes in working directory)

            modified:   lib/Range/Validator.pm
            modified:   t/01-validate.t

    no changes added to commit (use "git add" and/or "git commit -a")

    git-client> git commit -a -m "changed regexes for 2 o lone dot and relative tests"
    [master 169809c] changed regexes for 2 o lone dot and relative tests
     2 files changed, 17 insertions(+), 1 deletion(-)

    git-client> git push YourGithubLogin master
    fatal: HttpRequestException encountered.
    Username for 'https://github.com': YourGithubLogin
    Password for 'https://YourGithubLogin@github.com':
    Counting objects: 12, done.
    Delta compression using up to 4 threads.
    Compressing objects: 100% (8/8), done.
    Writing objects: 100% (12/12), 1.50 KiB | 385.00 KiB/s, done.
    Total 12 (delta 5), reused 0 (delta 0)
    remote: Resolving deltas: 100% (5/5), completed with 3 local objects.
    To https://github.com/YourGithubLogin/Range-Validator
       5083ec3..169809c  master -> master

    Today we committed twice, do you remember? The first time just the POD we added for the sub, and the second time just a few moments ago. We pushed just once. What is really in the online repository now?

    Go to the online repository, Insights, Network: the last two dots on the line segment are our two commits, pushed together in a single push. Handy, no? Click on the second-last dot and you will see the details of the commit concerning the POD, with the lines we removed in red and the lines we added in green. Commits are free: committing small changes frequently is better than committing a lot of changes all together.

    day five: deeper tests

    step 1) more validation in the code

    Today we plan to add two new validations to our sub: the first one intended to be used against ranges passed in string form, the second applied to all ranges, before returning them.

    The constraint for the string form is about reversed ranges, like 3..1, and it is added just after the last croak we added yesterday:

        # spot reverse ranges like 27..5
        if ( $range =~ /[^.]\.\.[^.]/ ){
            foreach my $match ( $range =~ /(\d+\.\.\d+)/g ){
                $match =~ /(\d+)\.\.(\d+)/;
                croak "$1 > $2 in range [$range]" if $1 > $2;
            }
        }

    Now it is important that we get into the habit of committing on our own whenever we add an atomic piece of code: so go commit! From now on not every git operation will be shown with full output, only the important ones (is this a git guide? No!).

    The other one, applied before returning the range as an array, is about overlapping ranges: (0..2,1) is equivalent to 0,1,1,2, with a nasty repetition that is terrible for the rest of the code outside the present module (this assumption is related to our current, fictional, scenario). So, just before returning from the sub, we simply use a hash to get unique elements in the resulting array:

        # eval the range
        @range = eval ($range);
        # remove duplicate elements using a hash
        my %single = map{ $_ => 1 } @range;        # -- new line
        # sort unique keys numerically
        @range = sort{ $a <=> $b } keys %single;   # -- new line

        return @range;
    As previously said, commit on your own, with a meaningful comment.

    You end up with:

    sub validate {
        my $range;
        my @range;
        # assume we have a string if we receive only one argument
        if ( @_ == 1 ){
            $range = $_[0];
        }
        # otherwise we received a list
        else {
            ...
        }
        # remove any space from string
        $range =~ s/\s+//g;
        # die if invalid characters
        croak "invalid character passed in string [$range]!"
            if $range =~ /[^\s,.\d]/;
        # not allowed a lone .
        croak "invalid range [$range] (single .)!" if $range =~ /(?<!\.)\.(?!\.)/;
        # not allowed more than 2 .
        croak "invalid range [$range] (more than 2 .)!" if $range =~ /\.{3}/;
        # spot reverse ranges like 27..5
        if ( $range =~ /[^.]\.\.[^.]/ ){
            foreach my $match ( $range =~ /(\d+\.\.\d+)/g ){
                $match =~ /(\d+)\.\.(\d+)/;
                croak "$1 > $2 in range [$range]" if $1 > $2;
            }
        }
        # eval the range
        @range = eval ($range);
        # remove duplicate elements using a hash
        my %single = map{ $_ => 1 } @range;
        # sort unique keys numerically
        @range = sort{ $a <=> $b } keys %single;

        return @range;
    }

    New features are worth pushing to the online repository: you know how it can be done. Do it.

    step 2) a git excursus

    Did you follow my small pieces of advice about git committing and meaningful messages? If so, it's time to see why it is better to be diligent: with git log (whose man page is probably longer than this guide..) you can review a lot about previous activity:

    git-client> git log HEAD --oneline
    bb952ee (HEAD -> master, YourGithubLogin/master) removing duplicates from overlapping ranges
    15a5f63 check for reverse ranges in string form
    169809c changed regexes for 2 o lone dot and relative tests
    a6dc557 initial POD to document validate function
    5083ec3 added 01-validate.t
    580f628 some code into validate, added 01-validate.t and modified Makefile.PL
    49a0690 moved POD, removed -T
    1788c12 module-starter created content

    This is definitely handy. HEAD is where your activity is focused at this moment. Try removing the --oneline switch to also see the author and date of each commit.

    As you can understand, git is a vast world: explore it to suit your needs. This is not a git guide ;)

    step 3) add deeper tests

    Until now we used a limited test arsenal: ok (also found in Test::Simple), use_ok and note from Test::More, and dies_ok from Test::Exception.

    Since we added a croak to our sub to prevent reversed ranges, we can use dies_ok again to check such situations (append the following code to our test file):

    foreach my $reversed ( '3..1,7..9', '1..4,7..5', '3..4, 7..5', '0..2,27..5' ){
        dies_ok { Range::Validator::validate( $reversed ) }
            "expected to die with reverse range [$reversed]";
    }

    Commit at will.

    Test::More has a lot of useful testing facilities (review the module documentation for inspiration) and now we will use is_deeply to implement some positive tests comparing expected and returned data structures.

    This is useful, in our case, to test that overlapping or unordered ranges are returned corrected. To do this we can use a hash of inputs and their expected return values (append the following code to our test file):

    my %test = (
        '1,1..3'      => [ (1,2,3) ],
        '1,2..5,4'    => [ (1,2,3,4,5) ],
        '1..5,3'      => [ (1,2,3,4,5) ],
        '8,9,1..2'    => [ (1,2,8,9) ],
        '1..3,3,5..7' => [ (1,2,3,5,6,7) ],
        '5..7,1..6'   => [ (1,2,3,4,5,6,7) ],
        '0..5,3'      => [ (0,1,2,3,4,5) ],
    );

    # ranges, even if overlapped or unordered, return the correct array
    foreach my $range ( keys %test ){
        my @res = Range::Validator::validate($range);
        is_deeply( $test{$range}, \@res, "correct result for range [$range]" );
    }

    The last two tests we added will produce the following output:

    ok 15 - expected to die with reverse range [3..1,7..9]
    ok 16 - expected to die with reverse range [1..4,7..5]
    ok 17 - expected to die with reverse range [3..4, 7..5]
    ok 18 - expected to die with reverse range [0..2,27..5]
    ok 19 - correct result for range [1,2..5,4]
    ok 20 - correct result for range [1,1..3]
    ok 21 - correct result for range [0..5,3]
    ok 22 - correct result for range [5..7,1..6]
    ok 23 - correct result for range [1..3,3,5..7]
    ok 24 - correct result for range [8,9,1..2]
    ok 25 - correct result for range [1..5,3]
    Commit.

    step 4) who is ahead? git branches and log

    Just a glance: it's up to you to explore this topic. Look at the following git session: two commands (one of which we have never used until now) issued just before and reissued just after the push:

    git-client> git log HEAD --oneline
    c3f8d5b (HEAD -> master) test for overlappped or unordered ranges
    f16789a test for reversed ranges
    bb952ee (YourGithubLogin/master) removing duplicates from overlapping ranges
    15a5f63 check for reverse ranges in string form
    169809c changed regexes for 2 o lone dot and relative tests
    a6dc557 initial POD to document validate function
    5083ec3 added 01-validate.t
    580f628 some code into validate, added 01-validate.t and modified Makefile.PL
    49a0690 moved POD, removed -T
    1788c12 module-starter created content

    git-client> git show-branch *master
    * [master] test for overlappped or unordered ranges
     ! [refs/remotes/YourGithubLogin/master] removing duplicates from overlapping ranges
    --
    *  [master] test for overlappped or unordered ranges
    *  [master^] test for reversed ranges
    *+ [refs/remotes/YourGithubLogin/master] removing duplicates from overlapping ranges

    git-client> git push YourGithubLogin master
    fatal: HttpRequestException encountered.
    Username for 'https://github.com': YourGithubLogin
    Password for 'https://YourGithubLogin@github.com':
    Counting objects: 8, done.
    Delta compression using up to 4 threads.
    Compressing objects: 100% (8/8), done.
    Writing objects: 100% (8/8), 1011 bytes | 505.00 KiB/s, done.
    Total 8 (delta 6), reused 0 (delta 0)
    remote: Resolving deltas: 100% (6/6), completed with 3 local objects.
    To https://github.com/YourGithubLogin/Range-Validator
       bb952ee..c3f8d5b  master -> master

    git-client> git log HEAD --oneline
    c3f8d5b (HEAD -> master, YourGithubLogin/master) test for overlappped or unordered ranges
    f16789a test for reversed ranges
    bb952ee removing duplicates from overlapping ranges
    15a5f63 check for reverse ranges in string form
    169809c changed regexes for 2 o lone dot and relative tests
    a6dc557 initial POD to document validate function
    5083ec3 added 01-validate.t
    580f628 some code into validate, added 01-validate.t and modified Makefile.PL
    49a0690 moved POD, removed -T
    1788c12 module-starter created content

    git-client> git show-branch *master
    * [master] test for overlappped or unordered ranges
     ! [refs/remotes/YourGithubLogin/master] test for overlappped or unordered ranges
    --
    *+ [master] test for overlappped or unordered ranges

    step 5) overall check and release

    We just need a few small changes and our module will be ready for production. We left one part of our code behind, namely the else branch dedicated to incoming arrays:

        # otherwise we received a list
        else {
            ...
        }
    and we can just fill it with @range = @_; and commit the change.

    But if the above is true, we have to move all the string checks inside the if ( @_ == 1 ){ ... block! Do it and commit the change. Now our sub looks like the following:

    sub validate {
        my $range;
        my @range;
        # assume we have a string if we receive only one argument
        if ( @_ == 1 ){
            $range = $_[0];
            # remove any space from string
            $range =~ s/\s+//g;
            # die if invalid characters
            croak "invalid character passed in string [$range]!"
                if $range =~ /[^\s,.\d]/;
            # not allowed a lone .
            croak "invalid range [$range] (single .)!" if $range =~ /(?<!\.)\.(?!\.)/;
            # not allowed more than 2 .
            croak "invalid range [$range] (more than 2 .)!" if $range =~ /\.{3}/;
            # spot reverse ranges like 27..5
            if ( $range =~ /[^.]\.\.[^.]/ ){
                foreach my $match ( $range =~ /(\d+\.\.\d+)/g ){
                    $match =~ /(\d+)\.\.(\d+)/;
                    croak "$1 > $2 in range [$range]" if $1 > $2;
                }
            }
            # eval the range
            @range = eval ($range);
        }
        # otherwise we received a list
        else {
            @range = @_;
        }
        # remove duplicate elements using a hash
        my %single = map{ $_ => 1 } @range;
        # sort unique keys numerically
        @range = sort{ $a <=> $b } keys %single;

        return @range;
    }

    Is our change safe? Well, we have a test suite: prove -l -v will tell you if the change impacts the test suite (if the tests are poor you can never be sure).

    Now our module is ready for production. It just lacks some good documentation. Not a big deal, but it is our duty to document what the sub effectively does.

    Add to the POD of our sub:

    Every string with occurrences of a lone dot or more than two dots will be
    rejected, causing an exception in the calling program. Reverse ranges like
    '3..1' passed as a string will also cause an exception.

    In both string and list form any duplicate element (overlapped range) will
    be silently removed. Any form of unordered list will be silently reordered.

    Check git status and commit. Use git log HEAD --oneline to see that the local repository is three steps ahead of the remote one. Push the changes to the online repository. Use git log HEAD --oneline again to see what happened.

    step 6) test list form

    Even if it is simpler, we have to test the array form of our sub. This time we can use an array of tests, each element being another array with two elements: first the list we pass to the sub, then the list we expect back from the sub. Again, using is_deeply is a good choice.

    Add the following tests to our file 01-validate.t:

    note ("starting test of list form"); my @test = ( # passed expected # correct ones [ [(0..3)], [(0,1,2,3)] ], [ [(0,1..3)], [(0,1,2,3)] ], [ [(0..3,5)], [(0,1,2,3,5)] ], # overlapped ones [ [(0..3,2)], [(0,1,2,3)] ], [ [(1,0..3)], [(0,1,2,3)] ], [ [(0..3,1..2)], [(0,1,2,3)] ], ); foreach my $list ( @test ){ my @res = Range::Validator::validate( @{$list->[0]} ); is_deeply( \@{$list->[1]},\@res, "correct result for list: @{$list->[0]}" ); }
    Run the test: we reached the big number of 32 successful tests! Congratulations!

    As always, commit the change with a meaningful comment and push this important set of changes to the online repository.

    day six: testing STDERR

    step 1) the problem of empty lists

    Our assumptions, in "day zero - the plan", were to accept only ordered, non-overlapping lists or their string representations. Other software in the project (again: a fictional scenario), where our validation module is used, blindly passes whatever it receives from outside (many different sources) to our validate sub. Many other subs or methods are then called with the output produced by our sub. All this software (out of our control) assumes that, if an empty list is received, then ALL elements are processed. This seemed the right thing to do. After the advent of our module some example usage can be:

    # all actions performed, no need to call our validate sub
    actions_to_activate_account();

    # only reset password and send mail with new password
    my @valid_range = Range::Validator::validate(3,12);
    actions_to_activate_account( @valid_range );

    # or in the string form:
    my $action_string = get_action_string_from_DB( actions => 'reset_pwd' );
    # $action_string is '3,12'
    my @valid_range = Range::Validator::validate( $action_string );
    actions_to_activate_account( @valid_range );

    # or in the array form:
    my @actions = get_action_list_from_DB( actions => 'reset_pwd' );
    # @actions is (3,12)
    my @valid_range = Range::Validator::validate( @actions );
    actions_to_activate_account( @valid_range );
    Right? The module goes into production and 98% of the errors from the foreign part of the code base disappear. Only 98%? Yes..

    Miss A of department Z calls your boss in a berserk state: not all their errors have gone away. They use the list form, but Miss A and developer B are sure no empty lists are passed to your validate sub. You call developer B, a good fellow, who explains that the lists are generated from a database field that cannot be empty (a NOT NULL constraint in the database):

    You - Listen B, if I emit a warning, will you be able to trace which list generated from the database provoked it?

    B - Sure! Can you add this?

    You - Yes, for sure. I can use a variable in the Range::Validator namespace, let's name it WARNINGS; you'll set it to a true value and only you, and not the rest of the company, will see errors on STDERR. Ok?

    B - Fantastic! I'll add the variable as soon as you tell me.

    You - Ok, but then I want to know which list provoked the error, right? For a coffee?

    B - Yeah man, for a coffee, as always.

    step 2) adding a Carp to the lake

    So we add a line near the top of the module, just after VERSION: our $WARNINGS = 0; to let dev B trigger our warnings. We commit even this small change.
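    For clarity, the top of lib/Range/Validator.pm now looks roughly like the following sketch; the exact $VERSION value and pragma lines are whatever your module already has ('0.01' is just a placeholder, and use Carp is assumed to be there already since we use croak):

    package Range::Validator;

    use strict;
    use warnings;
    use Carp;

    our $VERSION  = '0.01';   # placeholder: keep your current version
    our $WARNINGS = 0;        # dev B will set this to a true value to enable warnings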

    Then we add a carp call to the sub, triggered if $WARNINGS == 1 and @_ == 0, and we add this as an elsif condition:

    # assume we have a string if we receive only one argument
    if ( @_ == 1 ){
        # STRING PART
        ...
    }
    elsif ( $WARNINGS == 1 and @_ == 0 ){
        carp "Empty list passed in! We assume all element will be processed.";
    }
    # otherwise we received a list
    else {
        # NON EMPTY LIST PART
        ...
    }
    Git status, git commit on your own.

    step 3) prepare the fishing rod: add a dependency for our tests

    To grab STDERR in a test we have to add a dependency on the Capture::Tiny module, which is able, with its capture function, to catch STDOUT, STDERR and the results emitted by an external command or a chunk of perl code. A handy and tiny module.
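    As a quick illustration (not yet part of our test file), capture takes a block and returns whatever was printed to STDOUT and STDERR plus the return value of the block:

    use Capture::Tiny qw( capture );

    # capture a chunk of perl code
    my ($stdout, $stderr, @result) = capture {
        print "hello on STDOUT";
        warn  "ouch on STDERR";
        return 42;
    };
    # $stdout is "hello on STDOUT", $stderr starts with "ouch on STDERR", @result is (42)

    # capture an external command run via system ($^X is the current perl binary)
    my ($out, $err, $exit) = capture { system( $^X, '-v' ) };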

    Do you remember the place to specify a dependency? Bravo! It is in Makefile.PL, and we did the same in "day three step 3" when we added two modules to the BUILD_REQUIRES hash. Now we add Capture::Tiny to this part (remember to specify the module name in a quoted string):

    BUILD_REQUIRES => {
        'Test::More'      => '0',
        'Test::Exception' => '0',
        'Capture::Tiny'   => '0',    # -- new line
    },
    Commit this change.

    step 4) go fishing for the Carp in our test

    Now, in the 01-validate.t test file, we first add the module with use Capture::Tiny qw(capture) and then, at the end, we add some tests of the warning behaviour:

    note ("test of warnings emitted"); { local $Range::Validator::WARNINGS; my ($stdout, $stderr, @result) = capture { Range::Validator::valid +ate() }; unlike($stderr, qr/^Empty list passed in/, "no warning for empty l +ist unless \$Range::Validator::WARNINGS"); $Range::Validator::WARNINGS = 1; ($stdout, $stderr, @result) = capture { Range::Validator::validate +() }; like( $stderr, qr/^Empty list passed in/, "right warning for empty + list if \$Range::Validator::WARNINGS"); }
    Run the test suite and commit this change. Use git log HEAD --oneline to visualize our progress and push all recent commits to the online repository.

    The new version goes in production. The good fellow calls you:

    B - Ehy, we added your warnings..

    You - And..?

    B - We spotted our errors in the database..

    You - And which kind of error?

    B - Well.. do you know what perl sees when it spots 1, followed by FOUR dots, followed by 3?

    You - Ahh aha ah ah.. unbelievable! Yes, I suppose it parses it as: from 1 to .3, aka nothing, aka an empty list..

    B - Exactly! Can you imagine my boss face?

    You - I don't want to! A coffee is waiting for you. Thanks!

    step 5) document the new warning feature

    Add some POD (a few lines are better than nothing) to the module documentation:

    =head1 ENABLE WARNINGS

    If the $Range::Validator::WARNINGS variable is set to a true value then an empty list passed to validate will provoke a warning from the caller perspective.

    Commit this change and update the online repository.

    day seven: the module is done but not ready

    step 1) sharing

    As stated in "day zero - the plan" sharing early is a good principle: can be worth to ask in a forum dedicated to Perl (like perlmonks.org) posting a RFC post (Request For Comments) or using the dedicated website http://prepan.org/ to collect suggestions about your module idea and implementation.

    step 2) files in a CPAN distribution

    Your module is ready to be used, and it is already used, but at the moment it is not installable by a CPAN client nor can it be indexed by a CPAN indexer. Read the short but complete description of possible files at What are the files in a CPAN distribution?

    The following tests are not needed to install or use your module, but they help you spot what can be wrong in your distribution.

    step 3) another kind of test: MANIFEST

    In the the "day one - preparing the ground" we used module-starter to create the base of our module. Under the /t folder the program put three test we did not seen until now: manifest.t pod-coverage.t and pod.t

    These three tests are here for us and they will help us check that our module distribution is complete. Let's start with the first:

    shell> prove ./t/manifest.t
    ./t/manifest.t .. skipped: Author tests not required for installation
    Files=1, Tests=0,  0 wallclock secs ( 0.03 usr +  0.01 sys =  0.05 CPU)
    Result: NOTESTS
    Ok, no test run, just skipped. Go and look at what is inside the test: it skips all actions unless the RELEASE_TESTING environment variable is set. It will also complain unless a minimal version of Test::CheckManifest is installed. So set this variable in the shell (how to do this depends on your operating system: linux users probably need export RELEASE_TESTING=1 while windows ones will use set RELEASE_TESTING=1), use your CPAN client to install the required module (normally cpan Test::CheckManifest is all you need) and rerun the test:

    shell> prove ./t/manifest.t
    ./t/manifest.t .. # Failed test at ./t/manifest.t line 15.
    ./t/manifest.t .. 1/1
    # got: 0
    # expected: 1
    # The following files are not named in the MANIFEST file: .... # MANY LINES MORE..
    Omg?! What is all that output? The test complains about a lot of files that are present in the filesystem, in our module folder, but are not specified in the MANIFEST file. This file contains a list (one per line) of the files contained within the tarball of your module.

    In the above output we see many, if not all, of the files under the .git directory. Obviously we do not want them included in our module distribution. How can we skip them? Using the MANIFEST.SKIP file, which basically contains regular expressions describing which files should be excluded from the distribution.

    So go and create this file in the main folder of the module and add to it a line with a regex saying we do not want the .git directory: ^\.git\/ Then add the file with git add MANIFEST.SKIP and commit this important change.
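    A minimal MANIFEST.SKIP could look like the following sketch; only the first line is required by this step, the other two are common additions you may want later (editor backup files and the MYMETA.* files generated when you run Makefile.PL):

    ^\.git\/
    ~$
    ^MYMETA\.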

    Rerun the test (added some newlines for readability):

    shell> prove ./t/manifest.t
    ./t/manifest.t .. 1/1
    # Failed test at ./t/manifest.t line 15.
    # got: 0
    # expected: 1
    # The following files are not named in the MANIFEST file:
    /path/to/your/module/ignore.txt,
    /path/to/your/module/MANIFEST.SKIP,
    /path/to/your/module/t/01-validate.t,
    /path/to/your/module/xt/boilerplate.t
    ...
    Far better: the test now points us to two files we definitely need to include in MANIFEST, namely MANIFEST.SKIP and t/01-validate.t

    Go to the MANIFEST file and add them (where they are appropriate, near similar files, paying attention to case and paths), then commit the change. If you rerun the above test you'll see that files added to MANIFEST are no longer present in the failure output.
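    For reference, after the addition the MANIFEST could look more or less like this (your actual list may differ depending on the module-starter version used):

    Changes
    lib/Range/Validator.pm
    Makefile.PL
    MANIFEST
    MANIFEST.SKIP
    README
    t/00-load.t
    t/01-validate.t
    t/manifest.t
    t/pod-coverage.t
    t/pod.t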

    Let's examine the remaining two files. What is ignore.txt? It was created as a default ignore list by module-starter and it contains many lines of regexes. If we want module-starter to create MANIFEST.SKIP instead, next time we use it we'll specify --ignores='manifest' For the moment we can delete it. Commit.

    If you rerun the test you now see only /xt/boilerplate.t and, if you open it, you'll see that it just checks whether you left in our module some of the default text put there by module-starter. Ah! A useful test: let's run it:

    shell> prove ./xt/boilerplate.t
    ./xt/boilerplate.t .. ok
    All tests successful.

    Test Summary Report
    -------------------
    ./xt/boilerplate.t (Wstat: 0 Tests: 3 Failed: 0)
      TODO passed:   3
    Files=1, Tests=3,  1 wallclock secs ( 0.03 usr +  0.03 sys =  0.06 CPU)
    Result: PASS
    Ok no boilerplate garbage left. We can delete this test file and commit the change.

    Now, finally:

    shell> prove ./t/manifest.t
    ./t/manifest.t .. ok
    All tests successful.
    Files=1, Tests=1,  1 wallclock secs ( 0.02 usr +  0.03 sys =  0.05 CPU)
    Result: PASS
    Push recent changes into the online repository.

    step 4) another kind of test: POD and POD coverage

    In our /t folder we still have two tests we did not run: shame! module-starter created pod.t and pod-coverage.t for us. The first one checks that every piece of POD in our distribution is free of errors and the second ensures that all relevant files in your distribution are appropriately documented in POD. Thanks for this. Run them:

    shell> prove -l -v ./t/pod.t
    ./t/pod.t ..
    1..1
    ok 1 - POD test for lib/Range/Validator.pm
    ok
    All tests successful.
    Files=1, Tests=1,  0 wallclock secs ( 0.03 usr +  0.02 sys =  0.05 CPU)
    Result: PASS

    shell> prove -l -v ./t/pod-coverage.t
    ./t/pod-coverage.t ..
    1..1
    ok 1 - Pod coverage on Range::Validator
    ok
    All tests successful.
    Files=1, Tests=1,  0 wallclock secs ( 0.03 usr +  0.02 sys =  0.05 CPU)
    Result: PASS

    step 5) some README and final review of the work

    The README must contain some general information about the module. Users can read this file via the cpan client, so put a minimal description in it. The GitHub website uses it as the default page, so it is useful to have some meaningful text there. Some authors generate the text from the POD section of the module. Put in a short description, maybe the synopsis, and commit the change. Push it online.

    Now we can proudly look at our commits history in a --reverse order:

    git-client> git log HEAD --oneline --reverse
    1788c12 module-starter created content
    49a0690 moved POD, removed -T
    580f628 some code into validate, added 01-validate.t and modified Makefile.PL
    5083ec3 added 01-validate.t
    a6dc557 initial POD to document validate function
    169809c changed regexes for 2 o lone dot and relative tests
    15a5f63 check for reverse ranges in string form
    bb952ee removing duplicates from overlapping ranges
    f16789a test for reversed ranges
    c3f8d5b test for overlappped or unordered ranges
    89174fe in the else block @range = @_
    58dbb12 moved string checks into the if (@_ == 1) block
    8697b87 added POD for all string and list cecks
    e4f8eb1 added tests for lists
    3efd7ce added $WARNING = 0
    a46d6fc elsif block to catch empty @_ and carping under request
    3e5993d Capture::Tiny in Makefile.pl
    ac22e82 test for warnings emitted
    9667e22 POD for warnings
    13f53eb MANIFEST.SKIP first line
    99ab999 MANIFEST added MANIFEST.SKIP and 01-validate.t
    61b7c9f removed ignore.txt
    e3feb61 removed xt/boilerplate.t
    3c0da4f (HEAD -> master, YourGithubLogin/master) modified README
    A good glance at two dozen commits! We have done a good job, even if with some errors: committing multiple changes in different parts of the project (like in our third commit) is not wise: atomic commits are better. We also have some typos in the commit messages..

    step 6) try a real CPAN client installation

    It's now time to see if our module can be installed by a CPAN client. Nothing easier: if you are in the module folder just run cpan . and enjoy the output (note that this command will modify the content of the directory!).

    day eight: other module techniques

    option one - the bare bone module

    This is the option we chose for the above example and, even if it is the least favorable one, we used this form because it is extremely easy. The module is just a container of subs and all subs are available in the program that uses our module, but only via their fully qualified names, i.e. including the namespace where they are defined: Range::Validator::validate was the syntax we used all over the tutorial.

    Nothing bad if the above behaviour is all you need.

    option two - the Exporter module

    If you need more control over what is made available to the end user of your module, the Exporter CORE module will be a better approach.

    Go read the module documentation to have an idea of its usage.

    You can control what gets exported into the program using your module, so fully qualified package names will no longer be needed. I suggest you export nothing by default (i.e. leave @EXPORT empty) and use @EXPORT_OK instead, so that the end user imports subs from your module on explicit request.

    With Exporter you can also export variables into the program using your module, not only subs. It's up to you to decide if this is the right thing to do. Pay attention to the names you use and the risk of name collisions: what will happen if two modules export a function with the same name? A minimal sketch follows.
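    A minimal sketch of how our module could use Exporter; this is not the code we actually wrote in the tutorial, it only illustrates @EXPORT_OK with a sub and a variable:

    package Range::Validator;

    use strict;
    use warnings;
    use Carp;
    use Exporter qw( import );   # gives us Exporter's import method

    our $VERSION   = '0.01';
    our $WARNINGS  = 0;
    our @EXPORT    = ();                        # export nothing by default
    our @EXPORT_OK = qw( validate $WARNINGS );  # available on explicit request

    sub validate { ... }   # same body as in the tutorial

    1;

    In the program using the module the sub is then imported explicitly and can be called without the fully qualified name:

    use Range::Validator qw( validate );
    my @range = validate( '1..5' );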

    Perl is not restrictive in any meaning of the word: nothing will prevent the end user of your module from calling Your::Module::not_exported_at_all_sub() and accessing its functionality. A fully qualified name will always be available. The end user is then breaking the API you provide, an API where not_exported_at_all_sub is not even mentioned.

    option three - the OO module

    Preferred by many is the Object Oriented (OO) way. OO is neither better nor worse: it's a matter of aptitude or a matter of needs. See the relevant section in the core documentation, To-OO-or-not-OO?, about the choice.

    An object is just a little data structure that knows the class (the package) it belongs to. Nothing more complex than this. The data structure is generally a hash and its awareness of its class (package) is provided by the bless core function.

    Your API will just provide a constructor (conventionally new) and a series of methods the object can use, as sketched below.
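    A minimal, hypothetical OO flavour of our module; this is not the code we wrote in the tutorial, just an illustration of a hash-based object and bless:

    package Range::Validator;

    use strict;
    use warnings;
    use Carp;

    # constructor: build a hash, bless it into the class
    sub new {
        my ( $class, %args ) = @_;
        my $self = { warnings => $args{warnings} || 0 };
        return bless $self, $class;
    }

    # a method: the object arrives as the first argument
    sub validate {
        my ( $self, @range ) = @_;
        carp "Empty list passed in!" if $self->{warnings} and @range == 0;
        # ... same validation logic as the functional version ...
        return @range;
    }

    1;

    And the end user would write something like:

    my $validator = Range::Validator->new( warnings => 1 );
    my @clean     = $validator->validate( 0..3, 2 );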

    Again: nothing prevents the end user from calling one of your functions by its fully qualified name, as in Your::Module::_not_exported_at_all_sub(); it's just a matter of being polite. The core documentation includes some tutorials about objects:

    perlobj

    perlootut

    Many perl authors nowadays use specialized modules to build up OO projects: notably Moose, or its lighter flavor Moo.

    An OO module has many advantages in particular situations but is generally a bit slower than other module techniques.

    advanced Makefile.PL usage

    Until now we modified BUILD_REQUIRES to specify dependencies needed while testing our module and PREREQ_PM to list the modules our module needs to actually run.

    The file format is described in the documentation of ExtUtils::MakeMaker, where it is stated that, since version 6.64, another field is available: TEST_REQUIRES, defined as "A hash of modules that are needed to test your module but not run or build it". This is exactly what we need, but it forces us to also specify, in Makefile.PL, that we need 'ExtUtils::MakeMaker' => '6.64' in the CONFIGURE_REQUIRES hash.
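    In practice the relevant part of Makefile.PL would become something like the following sketch; all other fields stay as they are in our existing file:

    use ExtUtils::MakeMaker;

    WriteMakefile(
        # ... NAME, AUTHOR, VERSION_FROM and friends stay as they are ...
        CONFIGURE_REQUIRES => {
            'ExtUtils::MakeMaker' => '6.64',
        },
        TEST_REQUIRES => {
            'Test::More'      => '0',
            'Test::Exception' => '0',
            'Capture::Tiny'   => '0',
        },
        PREREQ_PM => {
            # runtime dependencies, whatever your module already lists
        },
    );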

    Version 6.64 of ExtUtils::MakeMaker was released in 2012, but you cannot be sure end users have a modern perl, so we can safely keep using BUILD_REQUIRES as always, or use some logic to fall back to the "older" functionality if ExtUtils::MakeMaker is too old. You can use the WriteMakefile1 sub used in the Makefile.PL of App::EUMM::Upgrade. A sketch of such a fallback follows.
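    A minimal sketch of the general fallback pattern (not the exact WriteMakefile1 code of App::EUMM::Upgrade): keep the arguments in a hash and, when ExtUtils::MakeMaker is older than 6.64, merge TEST_REQUIRES into BUILD_REQUIRES before calling WriteMakefile.

    use ExtUtils::MakeMaker;

    my %WriteMakefileArgs = (
        NAME          => 'Range::Validator',
        VERSION_FROM  => 'lib/Range/Validator.pm',
        TEST_REQUIRES => {
            'Test::More'      => '0',
            'Test::Exception' => '0',
            'Capture::Tiny'   => '0',
        },
        # ... other fields as in your existing Makefile.PL ...
    );

    # with an ExtUtils::MakeMaker older than 6.64 the TEST_REQUIRES key is
    # unknown, so fold its content into BUILD_REQUIRES instead
    unless ( eval { ExtUtils::MakeMaker->VERSION('6.64'); 1 } ) {
        my $test_requires = delete $WriteMakefileArgs{TEST_REQUIRES};
        $WriteMakefileArgs{BUILD_REQUIRES} = {
            %{ $WriteMakefileArgs{BUILD_REQUIRES} || {} },
            %{ $test_requires || {} },
        };
    }

    WriteMakefile(%WriteMakefileArgs);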

    other testing modules

    In the current tutorial we used Test::Exception to test failures: consider also Test::Fatal
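    For comparison, the same kind of check we wrote with Test::Exception could be written with Test::Fatal's exception function, roughly like this (assuming the validate sub from this tutorial):

    use Test::More;
    use Test::Fatal;
    use Range::Validator;

    like(
        exception { Range::Validator::validate('1...3') },
        qr/invalid range/,
        'a string with more than two dots throws an exception',
    );

    done_testing();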

    Overkill for simple test cases but useful for complex ones is the module Test::Class

    Other modules worth to see are in the Task::Kensho list.

    advanced testing code

    If in your tests you run the risk of code repetition (against the DRY - Don't Repeat Yourself - principle) you may find it handy to have a module only used by your tests, a module under the /t folder.

    You need some precautions, though.

    Let's assume you plan a test helper module named testhelper, contained in the /t/testhelper.pm file and used by many of your test files.

    You don't want CPAN to index your testhelper module; to prevent that, you can put a no_index directive into your main META.yml or META.json file (or both). You can also use a trick in your package definition, inserting a newline between the package keyword and the name of the package. Like in:

    package              # hide from CPAN indexer
        testhelper;

    In all test files where you want to use that helper module you must:

    use lib '.';
    use t::testhelper;
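    A hypothetical t/testhelper.pm could look like the sketch below (the name and the helper sub are assumptions, not part of the tutorial's code). Note that, since the file declares the package testhelper, its subs are called with the testhelper:: prefix:

    # t/testhelper.pm
    package              # hide from the CPAN indexer
        testhelper;

    use strict;
    use warnings;

    # a shared list of test cases reused by several test files
    sub test_ranges {
        return (
            [ [ 0 .. 3 ],    [ 0, 1, 2, 3 ] ],
            [ [ 1, 0 .. 3 ], [ 0, 1, 2, 3 ] ],
        );
    }

    1;

    And in a test file:

    use lib '.';
    use t::testhelper;

    foreach my $case ( testhelper::test_ranges() ) {
        my @res = Range::Validator::validate( @{ $case->[0] } );
        is_deeply( $case->[1], \@res, "correct result for list: @{ $case->[0] }" );
    }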

    bibliography

    CORE documentation about modules

    CORE documentation about testing

    • Test
    • Simple
    • More
    • prove
    • other CORE modules under the Test:: namespace
    • modules under the TAP:: namespace

    further readings about modules

    further readings about testing

    acknowledgements

    As with all my works, the present tutorial would not be possible without the help of the perlmonks.org community. Without being an exhaustive list, I want to thank: Corion, choroba, Tux, 1nickt, marto, hippo, haukex, Eily, eyepopslikeamosquito, davido and kschwab (to be updated ;)

      Absolutely brilliant.

      If you could flesh it out a bit more about how to release to CPAN it would be the one-stop resource.

      Hi to anyone out there: If you haven't installed Test::CheckManifest yet .. as of this moment I can only advise you to install version 1.38, as succeeding versions have a bug that doesn't recognise the entry in the MANIFEST.SKIP file exempting the .git folder from manifest consideration.
Re: [RFC] Discipulus's step by step tutorial on module creation with tests and git
by bliako (Monsignor) on Dec 19, 2018 at 13:05 UTC

    Thanks for this Discipulus.

    How to force author tests, e.g. prove -l -t t/manifest.t tells me that it is skipped as author tests.

Re: [RFC] Discipulus's step by step tutorial on module creation with tests and git
by cbeckley (Curate) on Dec 19, 2018 at 22:16 UTC

    Wow. ++ is simply inadequate.


    Thank you

      Your statement is confusing to me cbeckley, first you ++, then you say "inadequate". Is that just a loss in translation?

      I perused the entire postings briefly before my meetings this morning, and I have to say that I love the excitement Discipulus shows for the growth of the test world.

      Not down or up-voting your post here, but just want to understand what you mean.

      "Inadequate" means 'not good enough' (my understanding).

      "Adequate" means 'good enough' (my understanding).

        What he is saying (at least this is how I understand it) is that a simple "++" is inadequate to express his appreciation.
Re: [RFC] Discipulus's step by step tutorial on module creation with tests and git
by Laurent_R (Canon) on Dec 19, 2018 at 18:57 UTC
    Thank you very much, Discipulus, for this, in my name and in the name of the community (although, of course, I have no credential to speak in the name of anyone else than myself, other than being part of that community).

    I haven't had time yet to go through this (long) pieces, but, from a quick look, it really looks great and useful.

Re: [RFC] Discipulus's step by step tutorial on module creation with tests and git
by kcott (Archbishop) on Dec 20, 2018 at 08:36 UTC

    G'day Discipulus,

    ++ Worthy of front-paging but unfortunately it's too long (unless there's some way around that which I don't know about).

    — Ken

      <readmore>

        Well, thanks, I suppose, if you thought I didn't know about <readmore>; however, I do.

        My settings automatically open <readmore> sections of an OP, so they're not all that obvious to me. Accordingly, I checked for that before deciding not to front-page.

        My "... some way around that which I don't know about ..." referred to front-paging in a way that presented the post in a shortened form for the Monastery Gates. I'm pretty sure that's not possible but would be happy to learn about it if such a facility existed.

        Update: Discipulus has now added a <readmore> tag. I have front-paged his post.

        — Ken

Re: [RFC] Discipulus's step by step tutorial on module creation with tests and git
by RonW (Parson) on Jan 03, 2019 at 01:04 UTC

    Excellent guide.

    I would recommend that the guide be VCS agnostic rather than git-centric. This is, after all, about Perl modules.

      Thanks RonW for the compliment,

      you know: when you arrive late to a technology you grab the latest. I was so late in VCS that I found only git ;)

      Kidding aside, I found git so widely used by monks and perl authors (many "fork me on github" links on metacpan) that I thought I should include the bare minimum a perl module developer must know in 2018.. well, in 2019.

      I have practiced only git, and just a little, while writing my tutorial and I tried to make the git part as little as possible. I found only longish boring tutorials about git and I missed a short, minimalist approach as I tried to illustrate.

      L*

      There are no rules, there are no thumbs..
      Reinvent the wheel, then learn The Wheel; may be one day you reinvent one of THE WHEELS.
        I found only longish boring tutorials about git and I missed a short, minimalist approach as I tried to illustrate.

        I wonder if a set of minimalist tutorials - "Version Control for Perl Developers" - would be a worthy mini-project.

        I'd be willing to write the introduction and mini-tutorials for SVN and Fossil.

        Looks like you've got the material for a git mini-tutorial.

        (I see a CVS tutorial by trs80, so there is precedent for this.)

        Perhaps one tutorial with sections for the intro and the VCSs (the section for CVS would just link to the existing tutorial).