Re: Optimizing performance for script to traverse on filesystem
by Marshall (Abbot) on Feb 01, 2012 at 23:45 UTC
I would definitely recommend using File::Find or one of its variants (and there are some fancy ones) to do the directory scanning. This eliminates the need for you to write any recursive code yourself.
I didn't test the code below and there is bound to be some kind of mistake in it. But this is to give you an idea of another approach.
The simplest variant of File::Find calls a subroutine for every file and directory underneath the starting place. A localized variable, $File::Find::name, contains the full path of the current file or directory. I suggest running the wanted sub with only the print line at the end (shown in comments) to see the default order of the descent.
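Since the code referred to above didn't survive, here is a minimal sketch of the File::Find approach described (the starting directory and hash name are hypothetical, not from the original post):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Find;

# Hypothetical starting directory; adjust to your mail spool.
my $start_dir = '/var/spool/mail';

my %size_of_file;   # full path => size in bytes

find( \&wanted, $start_dir );

sub wanted {
    # $File::Find::name holds the full path of the current file or dir.
    return unless -f $File::Find::name;        # skip directories
    $size_of_file{$File::Find::name} = -s _;   # reuse stat info from -f
    # print "$File::Find::name\n";  # uncomment to see the descent order
}
```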
I think this collects the data you wanted, but I'm not 100% sure I got everything.
Since you are interested in performance, one not-so-obvious point is the default _ (underscore) filehandle. Not $_, just plain "_". When you do a file test operation, all this "stat" info gets collected via a file system request. If you want another file test on the same file (like -s, -f, -d), using the "_" filehandle reuses the previous stat info without making another expensive call to the file system.
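A small illustration of the stat-cache trick (the example path is hypothetical): only the first file test hits the file system, and the tests on "_" reuse the cached stat buffer.

```perl
use strict;
use warnings;

my $path = '/etc/passwd';   # hypothetical example file

if ( -e $path ) {           # one stat() call to the file system
    my $size   = -s _;      # reuses the cached stat buffer, no new call
    my $is_dir = -d _;      # ditto
    print "$path: size=$size, dir=", ( $is_dir ? "yes" : "no" ), "\n";
}
```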
Hope this at least provides some fuel for thought and further improvement.
Sort %mailboxesWithMessages by value to get the biggest one(s). Cycle through %allMailboxes: any key there that doesn't exist in the other hash is a mailbox with no messages (empty).
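A minimal sketch of that last step, with hypothetical data filled in for the two hashes (mailbox name => message count):

```perl
use strict;
use warnings;

# Hypothetical data for illustration.
my %mailboxesWithMessages = ( alice => 12, bob => 3, carol => 40 );
my %allMailboxes          = map { $_ => 1 } qw(alice bob carol dave);

# Biggest mailboxes first: sort the keys by their values, descending.
my @by_size = sort { $mailboxesWithMessages{$b} <=> $mailboxesWithMessages{$a} }
              keys %mailboxesWithMessages;
print "biggest: $by_size[0]\n";    # carol

# Any mailbox not in %mailboxesWithMessages has no messages.
my @empty = grep { !exists $mailboxesWithMessages{$_} } keys %allMailboxes;
print "empty: @empty\n";           # dave
```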