If I understand you correctly, you're saying that when the job is I/O-bound (because it's doing lots of "mv, zip or unlink" work), CPU usage is "okay" (at 4-11%), but when there's relatively little need to alter disk content, CPU usage shoots up as high as 83%, which is bad for some reason.
The replies above about reducing how often you invoke "stat" will help some, but I would also worry about having so many files in a single directory that everything gets bogged down: it takes a long time just to scan a directory with that many entries, especially if you're making multiple stat calls on every file instead of using all the information you get back from a single stat call.
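To illustrate that last point (a sketch in Python, since the original script isn't shown): one os.stat() call returns the file's type, size, and timestamps all at once, so there's no need for separate existence/size/age checks that each trigger their own stat() under the hood.

```python
import os
import stat
import time

def file_info(path):
    """Gather everything we need from a SINGLE stat() syscall.

    Calling os.path.isfile(), os.path.getsize(), and os.path.getmtime()
    separately would issue three stat() calls for the same file; here we
    issue one and reuse its fields.
    """
    st = os.stat(path)
    return {
        "is_regular_file": stat.S_ISREG(st.st_mode),
        "size_bytes": st.st_size,
        "age_days": (time.time() - st.st_mtime) / 86400,
    }
```

On a directory with hundreds of thousands of files, cutting three stat calls per file down to one is a threefold reduction in that part of the scan.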
If you're seeing 190,000 files created in less than a week (over 27,000 a day), you might want to see if you can divide that up among different directories, to limit the number of files per directory. File age seems to be what matters most, and it's apparently OK to move things around, so you might try creating a directory for each date and moving files into those daily directories according to their age. That would make things a lot easier to manage, in addition to reducing the overall load on both the CPU and the disk system.
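Something along these lines would do the bucketing (again a Python sketch under assumed names; the source and destination paths and the YYYY-MM-DD naming are just placeholders for whatever layout fits your setup):

```python
import os
import shutil
from datetime import datetime, timezone

def bucket_by_day(src_dir, dst_root):
    """Move each regular file in src_dir into dst_root/YYYY-MM-DD/,
    chosen by the file's modification time, so that no single
    directory accumulates hundreds of thousands of entries."""
    with os.scandir(src_dir) as entries:
        for entry in entries:
            if not entry.is_file():
                continue
            st = entry.stat()  # one stat per file; reuse st_mtime below
            day = datetime.fromtimestamp(
                st.st_mtime, tz=timezone.utc
            ).strftime("%Y-%m-%d")
            day_dir = os.path.join(dst_root, day)
            os.makedirs(day_dir, exist_ok=True)
            shutil.move(entry.path, os.path.join(day_dir, entry.name))
```

At 27,000 files a day, each daily directory stays small enough to scan quickly, and expiring old files becomes a matter of removing whole directories by date rather than stat-ing every file to check its age.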