In reply to "Finding duplicated code in Perl":

As far as I know, there is no way to reliably automate this process. My experience with automatic code reformatters and analyzers has, to date, been rather disappointing. But maybe I'm wrong.
You could consider building up a generalized library for your area of interest, and then writing each script by hand so that it consists mostly of a few library calls.
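As a minimal sketch of that idea (in Python for brevity, though the same applies to a Perl module; the function names and data format are hypothetical): extract the duplicated logic into one shared library, so each individual script reduces to a couple of calls.

```python
import csv
import io

# Hypothetical shared library: routines each script used to duplicate.
def parse_records(text):
    """Parse comma-separated input that every script previously parsed on its own."""
    return [row for row in csv.reader(io.StringIO(text)) if row]

def summarize(records):
    """Count occurrences per key (first column), a step repeated across scripts."""
    counts = {}
    for row in records:
        counts[row[0]] = counts.get(row[0], 0) + 1
    return counts

# An individual "script" then shrinks to a few library calls:
if __name__ == "__main__":
    data = "a,1\nb,2\na,3\n"
    print(summarize(parse_records(data)))  # prints {'a': 2, 'b': 1}
```

Once the shared pieces live in one place, fixing a bug there fixes it for every script at once.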
On the other hand, it is probably far too messy to test and document all the existing code; and even if you don't know for sure, you can reasonably assume that most of what the existing scripts do is correct. Therefore I'd:
- Write the new scripts as simply and quickly as possible
- Run them on the very same production cases the old scripts are run upon
- Find an automated way to keep track of script executions: the number of times the scripts ran correctly, and the inputs and outputs of the runs where the behaviour differed

At the point you are now, it's the right time to implement something like this. It won't cost much now but will be very useful in the future.

This way you'll:

- save a lot of time testing
- spot a lot of errors in both the old and new scripts, not only in the coding but, more interestingly, in the functional behaviour
- create test cases (better: automated test cases) for the existing code, so you'll know if a change here breaks something there
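The execution-tracking harness described above could be sketched like this (in Python for brevity; the commands, log file name, and JSON-lines format are my assumptions, not anything from the question): run the old and new script on the same input, record whether they matched, and keep the input and both outputs only when they diverged.

```python
import datetime
import json
import subprocess
import sys

def run_both(old_cmd, new_cmd, input_text, log_path="comparison_log.jsonl"):
    """Run the old and new script on the same input and append one log record.

    The log file name and JSON-lines format are illustrative assumptions.
    """
    old = subprocess.run(old_cmd, input=input_text, text=True, capture_output=True)
    new = subprocess.run(new_cmd, input=input_text, text=True, capture_output=True)
    same = old.stdout == new.stdout and old.returncode == new.returncode
    record = {"when": datetime.datetime.now().isoformat(), "match": same}
    if not same:
        # Keep the evidence only for the interesting (divergent) cases.
        record.update(input=input_text, old_out=old.stdout, new_out=new.stdout)
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return same

if __name__ == "__main__":
    # Example: compare two trivial "scripts" that agree.
    agree = run_both([sys.executable, "-c", "print('ok')"],
                     [sys.executable, "-c", "print('ok')"], "")
    print("match:", agree)
```

Over time the log doubles as a regression-test corpus: every divergent record is a concrete input with the two outputs, ready to be turned into an automated test case.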