Differential profiling
by chrestomanci (Priest)
on Jun 26, 2013 at 15:53 UTC
chrestomanci has asked for the wisdom of the Perl Monks concerning the following question:
I have a fairly complex standalone script that I would like to make faster. I have already profiled it using Devel::NYTProf, and using the profiling results I have found and fixed a number of hotspots in the code.
I am now at a stage where there are some tuning parameters to adjust to try to get the best performance. For example, for my DBIx::Class queries, which related resultsets should I prefetch?
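To illustrate the kind of parameter I mean, here is a hedged fragment (not from my actual code): the schema object, the `Order` source, and the `customer`/`items` relationship names are all made up. With no `prefetch`, touching a related object triggers one extra query per row (the classic N+1 pattern); with `prefetch`, DBIx::Class issues a single JOINed query up front.

```perl
# Illustrative fragment only -- $schema, 'Order', and the
# 'customer'/'items' relationships are hypothetical names.

# Without prefetch: one query for the orders, then one further query
# per row the first time a related object is accessed.
my @orders = $schema->resultset('Order')->search({ status => 'open' });

# With prefetch: related rows are fetched in the same JOINed query.
my @orders_pf = $schema->resultset('Order')->search(
    { status => 'open' },
    { prefetch => [ 'customer', 'items' ] },
);
```

Whether the JOIN or the N+1 queries is faster depends on the data, which is exactly why I want to measure each combination rather than guess.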
At the moment I am trying to find the best settings for these parameters by doing multiple test runs with the profiler enabled, each with different parameter values, and then comparing the HTML reports visually.
The process works for showing me large differences, such as a function taking 10 seconds under one set of settings but only 5 under another, but the whole thing feels rather crude. For example, there is no easy way to tell whether time saved in one function has simply moved to more time spent in other functions.
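The closest I have come to automating this is a small throwaway script along the following lines. It assumes I have already extracted per-subroutine times from two profiling runs into plain hashes (the numbers below are invented for illustration), diffs them, and suppresses differences below a noise threshold:

```perl
use strict;
use warnings;

# Sketch: compare per-subroutine times (seconds) from two profiling
# runs. The %run_a / %run_b figures are invented; in practice they
# would be extracted from the profiler's output.
my %run_a = ( 'main::load' => 10.2, 'main::parse' => 4.1, 'main::emit' => 0.50 );
my %run_b = ( 'main::load' =>  5.1, 'main::parse' => 8.9, 'main::emit' => 0.52 );

sub diff_profiles {
    my ( $old, $new, $threshold ) = @_;
    my %seen = map { $_ => 1 } keys %$old, keys %$new;
    my @report;
    for my $sub ( sort keys %seen ) {
        my $delta = ( $new->{$sub} // 0 ) - ( $old->{$sub} // 0 );
        next if abs($delta) < $threshold;    # suppress small differences
        push @report, sprintf '%-12s %+.2fs', $sub, $delta;
    }
    return @report;
}

print "$_\n" for diff_profiles( \%run_a, \%run_b, 0.1 );
```

With the sample numbers above this reports `main::load` as roughly 5 seconds faster and `main::parse` as roughly 5 seconds slower, while dropping the 0.02s noise in `main::emit`. But a fixed threshold is a poor substitute for a real test of statistical significance.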
I have also considered using the Benchmark module to compare related runs, but it does not look like a good solution, as it only considers overall execution time and does not delve any deeper into where my script spends its time.
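To be concrete about the limitation: Benchmark is fine for racing two self-contained alternatives head-to-head (the two subs below are made-up stand-ins, not my real code), but all it reports is which one is faster overall, not where inside each sub the time went:

```perl
use strict;
use warnings;
use Benchmark qw(timethese cmpthese);

my @words = map { "item$_" } 1 .. 1000;

# Two (made-up) ways of building a lookup hash. Benchmark reports
# their relative overall speed, but nothing about hotspots inside.
my $results = timethese( -1, {
    map_hash  => sub { my %h = map { $_ => 1 } @words },
    loop_hash => sub { my %h; $h{$_} = 1 for @words },
});

cmpthese($results);    # prints a relative-speed comparison table
```

The `-1` asks Benchmark to run each sub for at least one CPU second, and `cmpthese` prints a table of rates and percentage differences, which is useful as far as it goes.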
Is there a better way? Is there a feature of NYTProf that will let me compare the numbers from two runs of the same script (or very similar scripts) while suppressing statistically insignificant differences?