
Gather file count in directory

by Anonymous Monk
on Jan 05, 2012 at 12:36 UTC ( #946382=perlquestion )
Anonymous Monk has asked for the wisdom of the Perl Monks concerning the following question:

I have to gather the total number of files in a directory, for which I used the File::Find and Win32::OLE modules, but both become very slow as the amount of data on the disk increases. What is the fastest way to find the total number of files in a directory? (I don't need any other statistics, just the file count.)

Replies are listed 'Best First'.
Re: Gather file count in directory
by umasuresh (Hermit) on Jan 05, 2012 at 12:46 UTC

      I am in need of a Perl script..

Re: Gather file count in directory
by ~~David~~ (Hermit) on Jan 05, 2012 at 14:22 UTC
    I always like the File::Slurp module.
    use File::Slurp;
    my $dir          = 'somedir';
    my @files        = read_dir($dir);
    my $num_of_files = scalar @files;
      If you're willing to slurp in the entire filelist, you can also do
      my $num_of_files = scalar @{[<*>]};


      The File::Slurp method is not giving the total number of files on a drive (such as C:\). It only gives the count of the top-level entries in a directory; it doesn't count the files inside subfolders ..
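To count files inside subfolders as well, you need a recursive walk. A minimal sketch using the core File::Find module (the starting directory passed in is up to you; nothing here is specific to the OP's setup):

```perl
use strict;
use warnings;
use File::Find;

# Recursively count plain files under a starting directory.
# find() visits every entry in the tree and invokes the callback,
# which increments the counter for each regular file.
sub count_files_recursive {
    my ($dir) = @_;
    my $count = 0;
    find( sub { $count++ if -f $_ }, $dir );
    return $count;
}

# Usage (path is a placeholder):
# print count_files_recursive('C:/somedir'), "\n";
```

Note that this performs a stat on every entry, so on a large volume it will still be I/O-bound; it fixes the recursion problem, not the speed problem.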

Re: Gather file count in directory
by trizen (Hermit) on Jan 06, 2012 at 01:20 UTC
    For a decent number of files, you can use:
    sub get_files_count {
        my ($dir) = @_;
        opendir my $dir_h, $dir or return 0;

        # Number of files
        return scalar grep { -f "$dir/$_" } readdir $dir_h;

        # Number of files+dirs (alternative: remove the return above
        # to use this one instead)
        return scalar @{[readdir $dir_h]} - 2;
    }
    print get_files_count('./');
    Or for a larger number of files, you can use:
    sub get_files_count {
        my ($dir) = @_;
        opendir my $dir_h, $dir or return 0;
        my $files = 0;
        while (defined(my $file = readdir $dir_h)) {
            next if $file eq '.' or $file eq '..';
            # Uncomment the line below to count only files
            # next unless -f "$dir/$file";
            ++$files;
        }
        closedir $dir_h;
        return $files;
    }
    print get_files_count('./');

      I have already tried this subroutine, but it takes a long time to count the files as the disk usage grows (for a drive with 300 GB of used space it takes an hour to calculate the total). Is there a faster method?
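One common speedup is to drop the per-entry -f test and stat entries only to decide whether to recurse: anything that is not a directory is counted. This is a sketch, not a guaranteed fix; walking a large tree is mostly limited by the filesystem itself, the -d test here still stats each entry, and because -d follows symlinks, a symlink cycle would recurse forever:

```perl
use strict;
use warnings;

# Recursive count that skips the per-file -f test: directories are
# descended into, and every other entry (file, symlink, etc.) is
# counted without an extra type check.
sub fast_count {
    my ($dir) = @_;
    opendir my $dh, $dir or return 0;
    my $count = 0;
    while (defined(my $entry = readdir $dh)) {
        next if $entry eq '.' or $entry eq '..';
        my $path = "$dir/$entry";
        if (-d $path) {
            $count += fast_count($path);   # descend into subdirectory
        }
        else {
            $count++;                      # count everything else
        }
    }
    closedir $dh;
    return $count;
}

# Usage (path is a placeholder):
# print fast_count('C:/'), "\n";
```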

Approved by Corion