|There's more than one way to do things|
Edit: Due to a discussion in #moose today I found that I need to do some thinking on whether I want metrics/targets to be objects or classes. A log of the discussion can be found here: github. Input on that would be very appreciated.
I finished a usable release of this module today that is in what I consider decently CPAN-ready shape: Code::Statistics. The main thing that's missing right now is documentation, but since I'm not sure which parts of the API I'll keep as they are, I'm holding off on going into great detail there.
What I'm looking for is quite simply any comments, complaints, suggestions, etc. on what is wrong with the module, code, or API as it is now, and what could be added or changed. I am looking for criticism as harsh as possible (as long as it is justified).
Now, about the dist itself: In the recent past I found myself tasked with refactoring old code quite often. Each time I wound up wondering if there was some sort of analysis tool that would help me pinpoint possibly problematic hot spots. Examples:
Since there was none, I began poking around with PPI to see what I could do, and I found it to be extremely well suited for the task. A short proof-of-concept script was done in two nights, and now, two weeks later, I'm done laying the groundwork for a framework.
Why a framework? Looking at PPI I realized that there are a LOT of possible metrics that can be extracted with it. More combinations than I could possibly cover or even think up on my own. As such, the design is as follows:
The collector part looks for possible Perl code files. It then looks for a list of targets* inside each file, then calculates all compatible metrics** for each target. The resulting data is then written wholesale to a JSON file. The reporter part then picks that file up, summarizes the contents, and prints pretty ASCII tables.
* examples: PPI::Structure::Block, PPI::Statement::Sub, PPI::Document
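To illustrate the pipeline, the collector's JSON output might look roughly like this. Note that the keys, layout, and metric names here are my sketch of the idea, not the module's guaranteed format:

```json
{
   "files" : [
      {
         "path" : "lib/Foo/Bar.pm",
         "targets" : [
            {
               "type" : "PPI::Statement::Sub",
               "metrics" : {
                  "line_count" : 42,
                  "cyclomatic_complexity" : 7
               }
            }
         ]
      }
   ]
}
```

Since the whole data set is dumped in one go, the reporter (or any third-party tool) is free to aggregate it however it likes.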
I've tried to make it as customizable as possible (though I skimped a bit on the reporter, wanting to get a first version out before I lock myself in too much regarding APIs). Configuration is perltidy-style: a global config file in ~/, an optional local config file, and the command-line parameters are all meshed together, and profiles can be defined in the config files. Similarly, each target is merely a class that accepts a Code::Statistics::File object and returns a list of targets, with the processing contained entirely inside the target module. This way new targets can easily be added by myself or other coders. Metric collection is done in the same way. Additionally, there is logic in place for each metric or target to define compatible or incompatible counterparts.
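As a sketch of how I imagine a third-party target class slotting in. The package name, method names, and accessors here are illustrative assumptions, not the final API:

```perl
# Hypothetical add-on target: all for/foreach loops in a file.
package Code::Statistics::Target::ForLoop;
use strict;
use warnings;
use Moose;

# Given a Code::Statistics::File object, return the PPI nodes that
# should be measured. Assumes the file object exposes its PPI document
# via a ->ppi accessor (an assumption for this sketch).
sub find_targets {
    my ( $class, $file ) = @_;
    my $compounds = $file->ppi->find('PPI::Statement::Compound') || [];
    # PPI::Statement::Compound->type is 'if', 'while', 'for', 'foreach', ...
    return grep { $_->type and $_->type =~ /^for/ } @{$compounds};
}

# Declare metrics this target does not make sense with (illustrative).
sub incompatible_with { return qw( Code::Statistics::Metric::pod_lines ) }

1;
```

The point is that everything specific to the target lives in this one class, so the collector never needs to know about loops as such.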
Along the way I've tried to follow the philosophy of making it as customizable as possible while still providing sane defaults; the user can change a lot of things, but can also just run it without any changes and get something useful. An example of this: the user has the ability to provide a list of targets and metrics they are interested in, but lacking that, Module::Pluggable is used to load all targets and metrics present on the system.
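For example, a config file restricting a run to a few targets and metrics might look something like this. The file name and option names here are my guesses at the shape of the idea, not the documented ones:

```
# hypothetical ~/.codestatrc, perltidy-style: one CLI option per line
--targets=PPI::Statement::Sub,PPI::Document
--metrics=line_count,cyclomatic_complexity
```

Command-line parameters would then override these, and with no config file at all the Module::Pluggable defaults kick in.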
I hope I've been sufficiently informative for the moment. Feel free to ask any questions; I'm happy to elaborate.
And again, any sort of input is appreciated. I am simply trying to get enough feedback to make this as useful as possible for the community as a whole.