Re: Re: Efficient processing of large directory
by BrowserUk (Pope) on Oct 03, 2003 at 00:10 UTC
It's worth noting that if you're trying to find a subset of the files contained in a subdir, rather than processing them all, then using <*.txt> is considerably faster than using either File::Find or opendir/readdir/closedir. At least that is the case under Win32, as the wildcard matching is done by the OS and only the matching files are passed back.
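For example, the two approaches might look roughly like this (the directory path is illustrative):

    # Hand the wildcard to the OS; only the *.txt names come back
    my @txt = glob 'C:/some/dir/*.txt';

    # Versus reading every entry and filtering it in Perl
    opendir my $dh, 'C:/some/dir' or die "opendir failed: $!";
    my @txt_too = grep { /\.txt$/i } readdir $dh;
    closedir $dh;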
In the examples below, the first comparison selects all 17,576 files in a subdirectory. In this case, glob and File::Find come out pretty much even.
In the second comparison, a subset of 676 files is selected from the 17,576 using a wildcard. In this case, the glob runs 650% faster, as it only processes the 676 matching files rather than looping over the whole 17,576.
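A minimal sketch of that kind of comparison, using Benchmark's cmpthese (the test directory and wildcard pattern are illustrative, not the ones used for the figures above):

    use strict;
    use warnings;
    use Benchmark qw( cmpthese );
    use File::Find;

    my $dir = 'C:/test/files';    # hypothetical test directory

    cmpthese( -5, {
        # Let the OS do the wildcard matching and count what comes back
        glob_wild => sub {
            my $count = () = glob "$dir/a*.txt";
        },
        # Walk the whole directory and filter each name in Perl
        file_find => sub {
            my $count = 0;
            find( sub { $count++ if -f && /^a.*\.txt$/ }, $dir );
        },
    } );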
Of course, if any real processing was being done rather than just counting the files, the difference would rapidly disappear.
In this case, the OP's use of the word "efficient" most likely had to do with the memory used by slurping all 17,000+ names into memory rather than with speed, but if not all of those files are .txt files, the time saved might be worth having.
Examine what is said, not who speaks.
"Efficiency is intelligent laziness." -David Dunham
"When I'm working on a problem, I never think about beauty. I think only how to solve the problem. But when I have finished, if the solution is not beautiful, I know it is wrong." -Richard Buckminster Fuller
If I understand your problem, I can solve it! Of course, the same can be said for you.