File::Find redux: how to limit hits?
by 914 (Pilgrim) on Jun 04, 2002 at 00:32 UTC
914 has asked for the wisdom of the Perl Monks concerning the following question:
In this earlier SoPW node, i asked for and received some great advice about File::Find and how to limit its results, similar to find . -name foo.txt -maxdepth 3 -mindepth 3
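(For reference, -mindepth/-maxdepth can be emulated with File::Find's preprocess hook. This is only a sketch with made-up names and a throwaway test tree, not the code from that node:)

```perl
use strict;
use warnings;
use File::Find;
use File::Path qw(make_path);
use File::Temp qw(tempdir);

# Throwaway tree so the sketch runs anywhere: foo.txt at depths 2, 3, and 4.
my $root = tempdir(CLEANUP => 1);
make_path("$root/a/b/c");
for ("$root/a/foo.txt", "$root/a/b/foo.txt", "$root/a/b/c/foo.txt") {
    open my $fh, '>', $_ or die "open: $!";
    close $fh;
}

my $MINDEPTH = 3;
my $MAXDEPTH = 3;

# Depth = number of path components below the starting directory.
my $start_depth = () = split m{/}, $root;

my @found;
find({
    # preprocess sees each directory's listing before find() descends;
    # returning an empty list prunes everything past $MAXDEPTH.
    preprocess => sub {
        my $child_depth = (() = split m{/}, $File::Find::dir) - $start_depth + 1;
        return $child_depth > $MAXDEPTH ? () : @_;
    },
    wanted => sub {
        my $depth = (() = split m{/}, $File::Find::name) - $start_depth;
        push @found, $File::Find::name
            if $depth >= $MINDEPTH && $depth <= $MAXDEPTH && /^foo\.txt$/;
    },
}, $root);

print "$_\n" for @found;    # only $root/a/b/foo.txt sits at depth 3
```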
This time around, i need some help figuring out how to make File::Find return (i.e., stop searching and producing results) after an arbitrary number of hits, where $MAXHITS is set to the maximum number of results i need. (Other than that extra arg, this is the exact find() call i'm using.)
The story is this: i'm using this function to find all the files necessary to recreate an index file at bianca.com, and to actually create that file. My current script (on my scratchpad) works quite well for this, without fail (i'll gladly provide a tarball of a test filestructure to work with, just ask).
What i'm concerned about is that my test filestructure wasn't too big, but some of the 'live' directories this may be working on could have literally thousands (tens of thousands, maybe 100,000) of files in a very complicated directory tree. (i know, i know... but when we wrote the CGI that runs the BBS (in C, even... heretics, i know!) the filesystem-as-ersatz-database seemed OK.)
This introduces a huge time lag as the find function walks the tree. Since the files are invariably 'found' in last-created-first order (i'm not sure why, but it's so), it's quite safe for me to say "stop finding once you've found n hits", since those n files are sure to be the most recent n files, and are the ones i'm interested in.
Is there a way to do this? Does it involve hacking the module? i'm willing to give that a shot... but if there's a better, non-wheel-reinventing solution...
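(One non-invasive idiom, sketched here with illustrative names and a throwaway test tree rather than the actual script: since find() has no built-in stop switch, die() out of the wanted sub once the cap is reached and catch that particular die with eval — no module hacking needed.)

```perl
use strict;
use warnings;
use File::Find;
use File::Path qw(make_path);
use File::Temp qw(tempdir);

# Throwaway tree with more matching files than we want back.
my $root = tempdir(CLEANUP => 1);
make_path("$root/a/b");
for my $i (1 .. 10) {
    open my $fh, '>', "$root/a/b/foo$i.txt" or die "open: $!";
    close $fh;
}

my $MAXHITS = 3;    # cap on results
my @hits;

# die() bails out of the traversal; eval catches it before it kills us.
eval {
    find(sub {
        return unless /^foo\d+\.txt$/;
        push @hits, $File::Find::name;
        die "MAXHITS\n" if @hits >= $MAXHITS;
    }, $root);
};
die $@ if $@ && $@ ne "MAXHITS\n";    # re-throw any real error

print scalar(@hits), " hits\n";       # prints "3 hits"
```

One caveat: readdir order isn't guaranteed by anything, so if the "most recent n files" assumption really matters, it may be safer to collect candidates and sort them by mtime than to rely on the order find() happens to produce.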
And, if this *does* involve modifying the module, can/should/how do i post the changes so that others can use it?
as always, thanks!