
Filesystem: How to grab subset of directory structure

by P0w3rK!d (Pilgrim)
on Jun 10, 2003 at 14:30 UTC ( #264685=perlquestion: print w/replies, xml ) Need Help??

P0w3rK!d has asked for the wisdom of the Perl Monks concerning the following question:

Fellow Monks,

Given the following code, is there any way to grab a subset of files within a directory? If I use the code below on a directory containing 20,000 .xml files, it takes *forever* to process and pegs the CPU.

Thank you :)


sub getFiles {
    my $strDir = shift;
    my @aryFiles = ();

    # add all .xml files
    if (opendir (MYFILEHANDLE, $strDir)) {
        @aryFiles = map { "$_" } grep { /.xml/i } readdir MYFILEHANDLE;
    } else {
        print "Error: Can't read $strDir: $!";
    }
    closedir (MYFILEHANDLE);

    return @aryFiles;
}
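For illustration, one way to grab only a subset is to stop reading as soon as you have enough entries, instead of slurping the whole directory into memory first. This is just a sketch; the name getSomeFiles and the $max cap are made up here, not part of the original code:

```perl
use strict;
use warnings;

# Hypothetical variant of getFiles(): return at most $max matching
# names, reading the directory lazily one entry at a time.
sub getSomeFiles {
    my ($strDir, $max) = @_;
    my @aryFiles;
    opendir my $dh, $strDir or do {
        print "Error: Can't read $strDir: $!";
        return;
    };
    while (defined(my $file = readdir $dh)) {
        next unless $file =~ /\.xml$/i;
        push @aryFiles, $file;
        last if @aryFiles >= $max;    # stop early once we have enough
    }
    closedir $dh;
    return @aryFiles;
}
```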

Replies are listed 'Best First'.
Re: Filesystem: How to grab subset of directory structure
by Joost (Canon) on Jun 10, 2003 at 14:47 UTC
Your map is not necessary, and the regex could be a little more optimized.

    this will be slightly faster:

@aryFiles = grep /\.xml$/i, readdir MYFILEHANDLE;

But my guess is that putting 20,000 files in one directory is what is making your FS slow. You're probably better off spreading them over a number of subdirectories; about 500 files in each might be a lot faster. This all depends on your filesystem, so YMMV.
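To make the subdirectory idea concrete, here is a rough sketch of how incoming files could be spread over buckets so no single directory grows huge. The function name bucket_file, the checksum scheme, and the bucket count are all arbitrary choices for this sketch, not anything from the thread:

```perl
use strict;
use warnings;
use File::Copy qw[move];
use File::Path qw[mkpath];

# Sketch: move $dir/$file into one of $buckets numbered subdirectories,
# chosen from a simple checksum of the file name.
sub bucket_file {
    my ($dir, $file, $buckets) = @_;
    my $sum = 0;
    $sum += ord($_) for split //, $file;
    my $sub = sprintf "%s/%02d", $dir, $sum % $buckets;
    mkpath $sub unless -d $sub;
    move "$dir/$file", "$sub/$file" or die "move failed: $!";
    return $sub;
}
```

With ~40 buckets, 20,000 files works out to about 500 per directory, the ballpark suggested above.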


      In this version of the application the producer is sending me the files and dumping them into one directory. Version 2.0 will use subdirectories.

The problem is that even with subdirectories, when the consumer (my code) cannot send files to a remote FTP site (server is down or whatever), the files will queue up. Hence, I am getting tens of thousands of files. Just last weekend the FTP server was down and 35,000 XML files were queued up. Even with subdirectories the problem still remains.

      Thank you for your reply :)


Re: Filesystem: How to grab subset of directory structure (less RAM?)
by tye (Sage) on Jun 10, 2003 at 16:34 UTC

    Perhaps stuffing the whole list into memory just to throw most of them away is making things slow (doesn't seem very likely with only 20,000 file names, but worth considering):

while( defined( $_ = readdir MYDIRHANDLE ) ) {
    push @aryFiles, $_ if /\.xml$/i;
}
closedir MYDIRHANDLE;    # ^^^ note this

    I'd recommend the much simpler glob("*.xml"), but it appears that there are some alarming inefficiencies there that need investigation. |:

                    - tye
I am taking a snapshot of the XML files in the directory at time (t). I process all the files I read in, sleep for a little while, then start the process again. Looping all the way... BTW, I already put everything into memory. The "read" of the files in the directory is what is pegging the CPU.
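The snapshot-then-sleep loop described above might look roughly like this; take_snapshot, process_file, and the sleep interval are placeholders for this sketch, not the real application:

```perl
use strict;
use warnings;

# Take a snapshot of the .xml files present in $dir at this moment.
sub take_snapshot {
    my $dir = shift;
    opendir my $dh, $dir or die "Can't read $dir: $!";
    my @files = grep { /\.xml$/i } readdir $dh;
    closedir $dh;
    return @files;
}

# The surrounding loop: snapshot at time (t), process, sleep, repeat.
# while (1) {
#     process_file("$queue_dir/$_") for take_snapshot($queue_dir);
#     sleep 30;    # interval is a placeholder
# }
```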

      Thanks for your reply. :)


Re: Filesystem: How to grab subset of directory structure
by BrowserUk (Pope) on Jun 10, 2003 at 17:22 UTC

If the hit of processing the 20,000 files in one chunk is giving you grief, don't do that :)

Rather than scanning all the files and building an array of the results in one go, and then processing them in whatever way you have to, why not overlap the finding of the files with the processing?

You could do this by getting one file at a time from readdir and then processing them in a while loop:

opendir my $dh, '/path/to/files' or die ...;
while( my $file = readdir $dh ) {
    next unless $file =~ m[\.xml$];
    # do something with the file
}
closedir $dh;

However, you could (maybe) get a more efficient overlap if you read the directory in one thread and process the matched files in another. Depending on the nature of the load on the machine in question, the filesystem, and various other imponderables, this should allow the processing thread to process stuff whilst the read thread is blocked in IO states waiting for the OS.

This might be a starting point if the idea interests you. Try varying the -N option to adjust the buffer size so that the processing thread always has a file to process when it is ready, without the size of the Q becoming unwieldy.

#! perl -slw
use strict;
require 5.008;

use threads qw[yield];
use threads::shared;
use Thread::Queue;

use vars qw[$N];

die "Usage: $0 [-N=nn] dir .*\.xml" unless @ARGV == 2;
$N ||= 100;

my $signal : shared = 0;
my $Q = Thread::Queue->new();

sub readdir_asynch {
    my ($dir, $mask) = @_;
    print $mask;
    opendir my $dh, $dir or die "Couldn't open $dir";
    while( not $signal ) {
        yield if $Q->pending > $N;
        my $file = readdir( $dh );
        last unless defined $file;
        $Q->enqueue( $dir . '/' . $file ) if $file =~ m[^$mask$];
    }
    $Q->enqueue( "QUITING!!" );
}

my $thread = threads->create( \&readdir_asynch, $ARGV[0], $ARGV[1] );
yield;

while( ( my $file = $Q->dequeue ) ne 'QUITING!!' ) {
    printf "%s [%d]\n", $file, -S $file;
}

$thread->join;

    Setting $signal to a non-zero value will cause the read thread to terminate before it completes reading the files, and allow a clean exit from the main thread, if that becomes necessary.

Caveat: You might need to change the 'QUITING!!' message if you're ever likely to have a file with that name.

    Examine what is said, not who speaks.
    "Efficiency is intelligent laziness." -David Dunham
    "When I'm working on a problem, I never think about beauty. I think only how to solve the problem. But when I have finished, if the solution is not beautiful, I know it is wrong." -Richard Buckminster Fuller

      Great suggestion. I will attempt to incorporate this into Version 2.0.

      Thanks! :>


Approved by EvdB