I concur with rhesa's method. I've used this before with considerable success.
I just untarred a test structure containing a million files distributed this way, using 3 levels of subdirectory to give an average of ~250 files per directory. I then ran a quick test of opening and reading 10,000 files at random, and got an average time to locate, open, read and close each file of ~12ms.
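For context, here is a minimal sketch of how such a structure might be populated; the fs/ root, .dat suffix and 4k file size match the benchmark below, while the sequential 0 .. 999_999 keys and the use of File::Path are my assumptions, not part of the original test:

#! perl -w
use strict;
use Digest::MD5 qw[ md5_hex ];
use File::Path qw[ make_path ];

## Hash each key; the first three hex chars of the digest name the subdirs.
for my $key ( 0 .. 999_999 ) {
    my $md5 = md5_hex( $key );
    my( $a, $b, $c ) = unpack 'AAA', $md5;
    make_path( "fs/$a/$b/$c" );    ## no-op once the directory exists
    open my $fh, '>', "fs/$a/$b/$c/$md5.dat" or die "fs/$a/$b/$c/$md5.dat : $!";
    print $fh 'x' x 4096;          ## 4k of dummy data, as in the test
    close $fh;
}

The timing script: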
#! perl -slw
use strict;
use Math::Random::MT qw[ rand ];
use Digest::MD5 qw[ md5_hex ];
use Benchmark::Timer;
our $SAMPLE ||= 1000;
my $T = Benchmark::Timer->new;

for my $i ( 1 .. $SAMPLE ) {
    $T->start( 'encode/open/read/close' );

    ## The first three hex chars of the digest select the three subdir levels.
    my $md5 = md5_hex( int( rand 1e6 ) );
    my( $a, $b, $c ) = unpack 'AAA', $md5;

    $T->start( 'open' );
    open my $fh, '<', "fs/$a/$b/$c/$md5.dat"
        or warn "fs/$a/$b/$c/$md5.dat : $!";
    $T->stop( 'open' );

    $T->start( 'read' );
    my $data = do{ local $/; <$fh> };    ## slurp the whole file
    $T->stop( 'read' );

    $T->start( 'close' );
    close $fh;
    $T->stop( 'close' );

    $T->stop( 'encode/open/read/close' );
}
$T->report;
__END__
c:\test>612729-r -SAMPLE=10000
10000 trials of encode/open/read/close (112.397s total), 11.240ms/trial
10000 trials of open (110.562s total), 11.056ms/trial
10000 trials of read (158.554ms total), 15us/trial
10000 trials of close (365.520ms total), 36us/trial
The files in this case are all 4k, but that doesn't affect your seek time. If you envisage needing to deal with much more than 1 million files, moving to 4 levels of hierarchy would distribute the million files at just ~15 per directory; the quick calculation below shows the fan-out at each depth.
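For the arithmetic: each hex character of the digest multiplies the directory fan-out by 16, so files-per-directory falls geometrically with depth. A quick sketch:

#! perl -w
use strict;

## Leaf directories and files per directory for 1e6 files at each depth.
for my $levels ( 1 .. 4 ) {
    my $dirs = 16 ** $levels;
    printf "%d levels: %6d dirs, ~%.0f files/dir\n",
        $levels, $dirs, 1e6 / $dirs;
}

which gives ~244 files/dir at 3 levels (16^3 = 4096 directories) and ~15 at 4 (16^4 = 65,536).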