Re^2: Perl solution for storage of large number of small files

by isync (Hermit)
on Apr 30, 2007 at 16:24 UTC


in reply to Re: Perl solution for storage of large number of small files
in thread Perl solution for storage of large number of small files

Seems like exactly what I do now (after switching back to the filesystem)!

And the buckets? Are you talking about DB_Files you store away in this subdir structure (effectively thousands of small DBs, each holding files like foo12345.txt)? Or are you just spreading the files by hash and storing each one directly as /hash/path/to/file/foo12345.txt?

Replies are listed 'Best First'.
Re^3: Perl solution for storage of large number of small files
by BrowserUk (Patriarch) on Apr 30, 2007 at 16:49 UTC

    I concur with rhesa's method. I've used this before with considerable success.

    I just untarred a test structure containing a million files distributed this way, using 3 levels of subdirectory to give an average of ~250 files per directory. I then ran a quick test of opening and reading 10,000 files at random, and got an average time to locate, open, read, and close each file of ~12ms.

    #! perl -slw
    use strict;
    use Math::Random::MT qw[ rand ];
    use Digest::MD5 qw[ md5_hex ];
    use Benchmark::Timer;

    our $SAMPLE ||= 1000;

    my $T = new Benchmark::Timer;

    for my $i ( 1 .. $SAMPLE ) {
        $T->start( 'encode/open/read/close' );

        my $md5 = md5_hex( int( rand 1e6 ) );
        my( $a, $b, $c ) = unpack 'AAA', $md5;

        $T->start( 'open' );
        open my $fh, '<', "fs/$a/$b/$c/$md5.dat" or warn "fs/$a/$b/$c/$md5 : $!";
        $T->stop( 'open' );

        $T->start( 'read' );
        my $data = do{ local $/; <$fh> };
        $T->stop( 'read' );

        $T->start( 'close' );
        close $fh;
        $T->stop( 'close' );

        $T->stop( 'encode/open/read/close' );
    }

    $T->report;

    __END__
    c:\test>612729-r -SAMPLE=10000
    10000 trials of encode/open/read/close (112.397s total), 11.240ms/trial
    10000 trials of open (110.562s total), 11.056ms/trial
    10000 trials of read (158.554ms total), 15us/trial
    10000 trials of close (365.520ms total), 36us/trial

    The files in this case are all 4k, but that doesn't affect your seek time. If you envisage needing to deal with much more than 1 million files, moving to 4 levels of hierarchy would distribute the million files at just ~15 per directory (see the quick arithmetic below).
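
    For reference, a quick sketch of the fan-out arithmetic behind those numbers, assuming one hex digit of the MD5 per directory level as in the benchmark above:

    # One hex digit per level means each level fans out 16 ways.
    for my $depth ( 3, 4 ) {
        my $leaf_dirs = 16 ** $depth;
        printf "%d levels: %5d leaf dirs, ~%d files/dir for 1e6 files\n",
            $depth, $leaf_dirs, 1_000_000 / $leaf_dirs;
    }
    # 3 levels:  4096 leaf dirs, ~244 files/dir for 1e6 files
    # 4 levels: 65536 leaf dirs, ~15 files/dir for 1e6 files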


    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.
Re^3: Perl solution for storage of large number of small files
by rhesa (Vicar) on Apr 30, 2007 at 16:31 UTC
    Ah, sorry about that: my use of the word "bucket" was a bit sloppy. I just store the file (I deal with images, mostly).
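
    For illustration, here's a minimal sketch of that write side (not rhesa's actual code): hash the name, build the subdirectory path, and store the file directly at the hashed location. The store_file name, the fs root, and the depth parameter are all made up for the example.

    use strict;
    use warnings;
    use Digest::MD5 qw[ md5_hex ];
    use File::Path  qw[ mkpath ];

    # Spread files across subdirectories keyed on the first $depth hex
    # digits of the MD5 of the name, e.g. fs/a/f/3/af3...9c.dat
    sub store_file {
        my( $root, $key, $data, $depth ) = @_;
        $depth ||= 3;
        my $md5 = md5_hex( $key );
        my $dir = join '/', $root, split //, substr( $md5, 0, $depth );
        mkpath( $dir );                # no-op if the path already exists
        open my $fh, '>', "$dir/$md5.dat" or die "$dir/$md5.dat: $!";
        binmode $fh;                   # byte-exact storage, e.g. for images
        print {$fh} $data;
        close $fh;
        return "$dir/$md5.dat";
    }

    store_file( 'fs', 'foo12345.txt', 'file contents here', 3 );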
