Problems? Is your data what you think it is?
PerlMonks
To add a little to what Adrian says, periodicity could be a problem: many code segments exhibit very periodic behaviour.

As in all sampling, Nyquist tells us that once the frequency of the behaviour being measured approaches half the sampling rate, aliasing appears: a spurious component at the difference of the two frequencies. Worse, if a metric swings to opposite extremes on alternate sampling cycles, the samples average to zero and the magnitude is lost completely. We don't get negative memory usage, of course, but the point stands. Since the profiler runs at a fraction of the execution frequency, the results may vary wildly between runs.

Introducing a random time offset to every sample gives an effect similar to dithering, which actually increases the accuracy given a few runs to average over.

Sounds like a useful tool, good work.

In reply to Re: Dreaming of a Better Profiler
by andyf
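
The aliasing-plus-dithering point can be sketched with a toy simulation. Everything here is made up for illustration (the metric function, period, and amplitude are assumptions, not from any real profiler): a fixed sampling interval that matches the period of a periodic metric sees the same phase every time and reports a badly biased average, while a random per-sample offset decorrelates the phases and recovers the true mean.

```python
import math
import random

# Hypothetical periodic workload metric (illustrative only):
# oscillates around a true mean of 10.0 with amplitude 5.0 and period 1.0.
def metric(t):
    return 10.0 + 5.0 * math.sin(2 * math.pi * t)

N = 1000
PERIOD = 1.0

# Fixed-interval sampling at exactly the signal's period: every sample
# lands on the same phase (here the peak), so the average is biased by
# the full amplitude -- the alias the comment warns about.
fixed = [metric(0.25 + n * PERIOD) for n in range(N)]

# Dithered sampling: a random offset within each interval decorrelates
# sample phase from signal phase, so the bias averages out over many samples.
random.seed(42)
dithered = [metric(n * PERIOD + random.uniform(0.0, PERIOD)) for n in range(N)]

def avg(xs):
    return sum(xs) / len(xs)

print(avg(fixed))     # stuck near 15.0: sees only the peaks
print(avg(dithered))  # close to the true mean of 10.0
```

Averaging over several dithered runs, as the comment suggests, would shrink the residual noise in the dithered estimate further.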