PerlMonks  

Re: Finding files recursively

by holli (Abbot)
on Aug 04, 2019 at 18:09 UTC ( [id://11103874] )


in reply to Finding files recursively

You are fighting the module, and you are doing a lot of unnecessary work. Consider:
    use File::Find;

    my @found;
    my $path   = 'd:\env\videos';
    my $target = '2012.avi';

    find( sub {
        # We're only interested in directories
        return unless -d $_;
        # Bail if there is an .ignore here
        return if -e "$_/.ignore";
        # Add to the results if the target is found here
        push @found, $File::Find::name if -e "$_/$target";
    }, $path );

    print "@found";
    D:\ENV>perl pm10.pl
    d:\env\videos/2012

    D:\ENV>echo.>d:\env\videos\2012\.ignore

    D:\ENV>perl pm10.pl

    D:\ENV>


holli

You can lead your users to water, but alas, you cannot drown them.

Replies are listed 'Best First'.
Re^2: Finding files recursively
by ovedpo15 (Pilgrim) on Aug 04, 2019 at 19:58 UTC
    Thanks for your suggestion, but I don't understand the difference between the two suggestions. Also, what is $target? Thank you again.
      $target is just the filename you are looking for, "secret.file" in your case.
      The difference is that my code exits the wanted function immediately when it is not dealing with a directory. Only when it encounters a directory does it check whether the target file is in that directory.

      Your code, on the other hand, looks at each and every file and calculates its base path (which is unnecessary; that information is already there in $File::Find::name). It then uses that base directory to look for the target file.
      This also means, and this is the biggest slowdown, that you test the same directory once for every entry it contains.
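      To make the redundant checks concrete, here is a small counting sketch. The directory layout and the counters are invented purely for illustration; they are not from the thread:

        use strict;
        use warnings;
        use File::Basename qw(dirname);
        use File::Find;
        use File::Path qw(make_path);
        use File::Temp qw(tempdir);

        # Throwaway tree: one directory holding the target plus 5 other files.
        my $root = tempdir( CLEANUP => 1 );
        make_path("$root/videos/2012");
        for my $name ( 'secret.file', map { "clip$_.avi" } 1 .. 5 ) {
            open my $fh, '>', "$root/videos/2012/$name" or die $!;
            close $fh;
        }

        my $target = 'secret.file';
        my ( $per_file_tests, $per_dir_tests ) = ( 0, 0 );

        # Per-file approach: every file re-derives and re-tests its directory.
        find( sub {
            return if -d $_;
            my $dir = dirname($File::Find::name);
            $per_file_tests++;               # one -e test per *file*
            my $found = -e "$dir/$target";
        }, $root );

        # Per-directory approach: one test per *directory*.
        find( sub {
            return unless -d $_;
            $per_dir_tests++;                # one -e test per *directory*
            my $found = -e "$_/$target";
        }, $root );

        print "per-file tests: $per_file_tests, per-directory tests: $per_dir_tests\n";

      With 6 files in one directory, the per-file version runs 6 identical existence tests where the per-directory version runs one per directory (3 in total here), and the gap grows with the number of entries per directory.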


      holli

      You can lead your users to water, but alas, you cannot drown them.
        Back with results! :)
        I tried my code and your code. My code ran for 13858 seconds and your code ran for 16968 seconds. I thought it would reduce the time a little, but it didn't; maybe that's because the machine was being used by others at the time, but it is a big difference. Do you have any other suggestions? 4 hours for a search is quite a lot of time :(
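        One refinement worth noting here (not offered in the thread itself): if the .ignore marker is meant to exclude an entire subtree, setting $File::Find::prune stops File::Find from descending below that directory at all, instead of merely skipping it from the results, which can save a lot of stat calls on a large tree. A minimal sketch, using an invented temporary tree rather than the thread's d:\env paths:

          use strict;
          use warnings;
          use File::Find;
          use File::Path qw(make_path);
          use File::Temp qw(tempdir);

          # Throwaway tree: "kept" contains the target; "skipped" is marked
          # with .ignore and has another copy buried one level deeper.
          my $root = tempdir( CLEANUP => 1 );
          make_path( "$root/kept", "$root/skipped/deep" );
          for my $f ( "$root/kept/secret.file",
                      "$root/skipped/.ignore",
                      "$root/skipped/deep/secret.file" ) {
              open my $fh, '>', $f or die $!;
              close $fh;
          }

          my $target = 'secret.file';
          my @found;

          find( sub {
              return unless -d $_;
              if ( -e "$_/.ignore" ) {
                  # Prune: do not descend below this directory at all.
                  $File::Find::prune = 1;
                  return;
              }
              push @found, $File::Find::name if -e "$_/$target";
          }, $root );

          print "@found\n";

        Only the "kept" directory is reported; the copy under "skipped/deep" is never even visited. Whether this helps the 4-hour run depends on how much of the tree sits below .ignore-marked directories.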
