http://www.perlmonks.org?node_id=707140


in reply to Re^2: Optimize my code with Hashes
in thread Optimize my code with Hashes

the code took:67358 wallclock secs (29935.47 usr 60.38 sys + 0.00 cusr 0.00 csys = 29995.85 CPU) in considering each entry from PeopleFirst extract ..

That's interesting. So 29935/67358 = 44% of your time was spent on user CPU. That is significant, and you might want to look into profiling the app's CPU usage (e.g. with Devel::Profile or Devel::DProf).
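For reference, both of those profilers are driven from the command line; on a cut-down run the invocations look roughly like this (yourscript.pl and small_extract are just stand-ins for your script and a trimmed input file):

    perl -d:DProf yourscript.pl small_extract     # writes tmon.out
    dprofpp                                       # summarises tmon.out per subroutine
    perl -d:Profile yourscript.pl small_extract   # Devel::Profile's flat per-sub report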

Of course, it also means that 56% of your time is spent doing other things. If that is net latency then you'd do well to look at bulk import/export instead.
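Purely as an illustration of cutting per-record round trips (I don't know what your back end actually is, so the DBI connection string, table and column names below are all hypothetical), the idea is one prepared statement and one commit for the whole batch rather than a fresh request per record:

    use strict;
    use warnings;
    use DBI;

    # hypothetical back end and schema, for illustration only
    my $dbh = DBI->connect( 'dbi:Oracle:hrdb', 'user', 'pass',
                            { RaiseError => 1, AutoCommit => 0 } );

    my @records;    # populated from the PeopleFirst extract (not shown)

    my $sth = $dbh->prepare(
        'UPDATE people SET dept = ?, manager = ? WHERE emp_id = ?'
    );

    for my $rec (@records) {
        $sth->execute( $rec->{dept}, $rec->{manager}, $rec->{emp_id} );
    }
    $dbh->commit;   # one commit for the whole batch

The same principle applies to an LDAP or web-service back end: batch the changes, or hand the whole extract to the target system's own bulk loader if it has one.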

Your timestamp logging appeared to show ~6 secs for one request; is that right? That can't be representative: as noted elsewhere in this thread, you'd never manage 50k updates in 18 hours if each took 6s (50,000 x 6s is over 80 hours).
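If you want to double-check that number, wrapping each request in a high-resolution timer is cheap. A minimal sketch, where do_update() and @records are stand-ins for your real per-record call and data:

    use strict;
    use warnings;
    use Time::HiRes qw(gettimeofday tv_interval);

    sub do_update { }          # stand-in for the real per-record update
    my @records = (1 .. 5);    # stand-in data

    for my $record (@records) {
        my $t0 = [gettimeofday];
        do_update($record);
        printf STDERR "update took %.3fs\n", tv_interval($t0);
    }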

Lastly, if you do profile the app, it will probably benefit you to produce a cut-down version which runs more quickly. This is useful because the profilers slow things down and generate large amounts of data, so they'll probably break on such a big run.

Also, having a test case that repeats quickly (e.g. ~10 mins) will greatly accelerate your ability to test ideas for code and algorithm changes; a crude way to get one is sketched below.
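For example, you could simply stop after the first N records of the extract. The file name, $limit and process_line() here are hypothetical; tune $limit until a run takes about ten minutes:

    use strict;
    use warnings;

    my $extract_file = 'peoplefirst_extract.csv';   # hypothetical name
    my $limit        = 5_000;                       # tune for a ~10 min run

    open my $fh, '<', $extract_file or die "open $extract_file: $!";
    my $count = 0;
    while ( my $line = <$fh> ) {
        last if ++$count > $limit;
        process_line($line);
    }
    close $fh;

    sub process_line { }    # stand-in for the real per-record work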

However, the hard part is knowing if your cut-down test case has the same performance profile as your main job run.

Another thought: if that 'missing' 56% of your time falls overnight, you might be sharing the network with a backup job, or something else which saturates the net and makes your network response times very slow.