Perl Version Change - Detecting Problems in Advance
by Roger_B (Scribe) on Sep 21, 2005 at 17:02 UTC

Roger_B has asked for the wisdom of the Perl Monks concerning the following question:
I am upgrading Perl from version 5.005_03 to version 5.8.x, and I want to verify that there are no issues in our existing scripts caused by changes to Perl's syntax.
I know that the incompatibilities are few, and everything will probably 'just work'. I know I can use 'perl -c' to verify that the files compile.
This leaves the possibility of a few cases where a Perl script will compile but behave differently. I would like to do what I can to catch such issues in advance, in case there are any holes in our test scripts.
I was hoping this would be a problem that has been addressed before (it seemed a logical requirement to me), but I have had no success with the Perl FAQs or Google.
The only lead I have found so far is from the ChatterBox, where the PPI module was suggested. This could certainly be used, but I wonder whether anyone has already done something like this, with or without that module?
The '-w' flag and the 'use strict;' pragma are not used in some of our Perl scripts. While I would like to see them added throughout, this is not likely to happen. My feeling is that their absence increases the chances of issues existing while decreasing the chances of catching them.
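To give an idea of what PPI offers: it parses Perl source into a document tree you can query. This is only a sketch, assuming PPI is installed (it is a CPAN module, not core), that lists every word token in a script as a starting inventory of constructs worth auditing by hand:

```shell
# Sketch only: assumes the PPI module is installed (e.g. via CPAN).
# Lists every word token (builtins, barewords, sub names) in a script,
# sorted and deduplicated -- a starting point for a manual audit.
perl -MPPI -le '
    my $doc = PPI::Document->new(shift) or die "parse failed";
    print $_->content for @{ $doc->find("PPI::Token::Word") || [] };
' some_script.pl | sort -u
```

A real incompatibility scan would filter this list against the constructs known to have changed between 5.005 and 5.8, which is the part I have not seen done already.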
Update 21 Sep 2005
Thanks all for your responses to date. Keep them coming! Here are some observations:
test coverage: Agreed, we do need good test coverage. Improving our coverage is an ongoing goal here, although not specifically my responsibility. The Perl code is distributed throughout the application, so providing better coverage specifically for Perl as a separate task isn't really practical. As the scripts interact in various ways with other code in C, Java and shell scripts, I don't think we'll manage to get around this with Perl automated tests.
As far as I am aware, 100% coverage of all possible permutations is impossible and we need to consider cost; the cost of going from 95% to 99% might be too high to justify for example.
I see these two methods as complementary. A search for incompatible code searches 100% of the code with less than 100% efficiency; a good test suite covers less than 100% of the code with near to 100% efficiency. Thus while the overlap ought to be substantial, each covers areas the other may miss.
strict & warnings pragmas: Personally, I never write a Perl program of more than one line without these pragmas. I am familiar with them and completely sold on their use (of course, under 5.005_03 I have to use '-w', as the warnings module doesn't exist yet, but that will change when we upgrade). Unfortunately, that attitude is not universal here. While I will suggest they are added as part of the upgrade, the budget for this is not guaranteed, so I want to cover the case that scripts exist without them.
Note that, while actually adding the pragmas is simple, there will be a significant cost associated with finding and applying appropriate fixes for the errors and warnings they produce. I am thinking particularly of variable scope and possibly barewords. Does anyone have any thoughts on the effort this might require? If I can confidently provide a low estimate, the chances of having it approved increase!
perl -c: agreed, this won't pick up anything our tests wouldn't, but it would pick problems up earlier, reducing costs. I think it's worthwhile on that basis.
perldeltas: I have looked briefly at these; that's what prompted me to write this message! Thanks for the pointer.
Perl Medic: Excellent pointer. Thanks, I'll look into that.
threads and unicode: Agreed, these are two of the problem areas. I am quite confident they are not used here: I have grepped case-insensitively for 'thread', 'locale' and 'utf' and found no occurrences. Are there any other simple searches I should run to confirm this?
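The greps I ran were of this shape; the extra pattern below ('encoding', a pragma new in 5.8) is only a suggestion for another 5.8-sensitive area, not a definitive list (sort stability also changed in 5.8, but grepping for 'sort' would be far too noisy to be useful):

```shell
# Case-insensitive search for 5.8-sensitive constructs across the tree.
# 'thread', 'locale' and 'utf' are the patterns from the original check;
# 'encoding' is a suggested addition (the pragma is new in 5.8).
for pat in thread locale utf encoding; do
    echo "== $pat =="
    grep -ril "$pat" scripts/
done
```

An empty listing under each heading supports the claim that these areas are unused.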
Thanks again for all your input.