Re^2: Perl solution for storage of large number of small files

by isync (Hermit)
on Apr 30, 2007 at 10:40 UTC ( [id://612731] )


in reply to Re: Perl solution for storage of large number of small files
in thread Perl solution for storage of large number of small files

Actually, I am thinking about switching over completely to the filesystem-only approach and dropping the data-buckets idea altogether. BTW: what is the maximum number of files on my ext3 filesystem?

I've got a client-server architecture of scripts here - no emails, no IMAP server... It's a research project. (But the challenges seem similar - thanks for the hint!)
A data-generation script gathers measurement data and produces the 40K-120K packets of output, while a second script takes this output and makes sense of it, thus enriching the meta-data index. Both scripts are controlled by a single handler, which keeps the meta-data index and stores the data packets (enabling us to run everything in clusters). And that handler is where the bottleneck is. So I am thinking about taking the storage part out of the handler and letting the data gatherer write to disk directly via NFS.

NFS also turned out to explain the "larger files are quicker" paradox: my development machine tied the hash over NFS, and that is what produced those numbers. Actually running the script on the server showed me that the tie is always fast. The insert is fast most of the time (although every few cycles, when DB_File expands the file or so, it slows down...). But the untie takes forever on growing files...

The expected access pattern is mostly plain storage (gatherer), followed by one read of every stored packet (sense-maker). Then, every few days, an update/rewrite of all packets, possibly involving resizing (the gatherer again).

The "new toy" idea is now to use a set of disks tied together via NFS(distributed) or LVM(locally), mounted on subdirs building a tree of storage space leaves (replacing my few-files approach).

Replies are listed 'Best First'.
Re^3: Perl solution for storage of large number of small files
by jbert (Priest) on Apr 30, 2007 at 11:45 UTC
    The maximum number of files on a filesystem is limited by the number of inodes allocated when you create it (see 'mke2fs' and the output of 'df -i'). You can also tweak various settings on ext2/ext3 with tune2fs.
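
    For instance, a minimal sketch of checking this from Perl, assuming a Linux box with 'df' and 'tune2fs' on the PATH (the mount point and device below are placeholders):

        #!/usr/bin/perl
        use strict;
        use warnings;

        # Inode capacity and usage for the filesystem holding the data.
        my $mount = '/var/data';                        # placeholder mount point
        print scalar qx(df -i $mount);                  # total, used and free inodes

        # Inode-related settings chosen at mkfs time (usually needs root).
        print scalar qx(tune2fs -l /dev/sdb1 | grep -i inode);   # placeholder device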

    As you probably already know, written data is buffered in many places between your application and the disk. Firstly, the perlio layer (and/or stdio in the C library) may buffer data - this is controlled by $| or the equivalent methods in the more modern I/O packages.
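
    For example (just a sketch), the per-handle equivalent of $| is the autoflush() method that IO::Handle adds to filehandles:

        use strict;
        use warnings;
        use IO::Handle;                 # provides autoflush() on filehandles

        # The path is a placeholder for one of your data packets.
        open my $fh, '>', '/tmp/packet.dat' or die "open failed: $!";
        $fh->autoflush(1);              # flush the userland (perlio/stdio) buffer after every write
        print {$fh} "measurement data\n";   # reaches the kernel right away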

    Flushing will ensure the data is written to the kernel, but it won't ensure the kernel writes it to disk. You need the 'fsync' system call for this (and/or the 'sync' system call). You can get access to these via the File::Sync module.

    Note that closing a filehandle only *flushes* it (writes out the userland buffers); it does not *sync* it (push the kernel buffers to disk).
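
    A minimal sketch of the flush-then-sync sequence with File::Sync (assuming the module is installed; the filename is made up):

        use strict;
        use warnings;
        use IO::Handle;
        use File::Sync qw(fsync);       # CPAN wrapper around the fsync(2) system call

        open my $fh, '>', '/tmp/bucket.dat' or die "open failed: $!";
        print {$fh} "payload\n";
        $fh->flush or die "flush failed: $!";   # userland buffers -> kernel
        fsync($fh) or die "fsync failed: $!";   # kernel buffers   -> disk (modulo the drive's own cache)
        close $fh  or die "close failed: $!";   # close alone would only have flushed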

    (If you're paranoid and/or writing email software, you may also want to note that syncing only guarantees that the kernel has successfully written the data to the disk. Most/all disks these days have a write buffer - there isn't a guarantee that data in that write buffer makes it onto persistent storage in the event of a power failure. You can get around this in various ways, but I'm drifting just a bit out of scope here...)

    The above is to suggest an explanation for 'untie' taking a long time (flushing lots of buffered data on close), and it's also something anyone doing performance-related work on disk systems should know about. In particular, it may suggest why sqlite seemed slow on your workload. For robustness, sqlite calls 'fsync' (resulting in real disk I/O) at appropriate times (i.e. when it tells you that an insert has completed).

    (Looking at one of your other replies...) If you are writing a lot of data to sqlite, you'll probably want to investigate the use of transactions and/or the 'async' mode. By default, sqlite is slow-but-safe; by default, writing data to a bunch of files is quick-but-unsafe. (But both systems can operate in both modes - you just need to make the right system calls or set the right config options.)
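
    As a rough sketch with DBI/DBD::SQLite (the database file, table and column names are made up), wrapping a batch of inserts in one transaction means one sync for the whole batch instead of one per row:

        use strict;
        use warnings;
        use DBI;

        my $dbh = DBI->connect('dbi:SQLite:dbname=index.db', '', '',
                               { RaiseError => 1, AutoCommit => 1 });
        $dbh->do('CREATE TABLE IF NOT EXISTS packets (id INTEGER PRIMARY KEY, meta TEXT)');

        $dbh->begin_work;                                   # start the transaction
        my $sth = $dbh->prepare('INSERT INTO packets (id, meta) VALUES (?, ?)');
        $sth->execute($_, "meta for packet $_") for 1 .. 10_000;
        $dbh->commit;                                       # one sync here, not 10,000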

    If you're going to be doing speed comparisons between storage approaches, you need to be sure of your data-integrity needs and then put each storage system into the mode that suits those needs before comparing. (You may well be doing all this already - apologies for the lengthy response if so.)

      Actually, thank you for the lengthy reply!

      I already learned about sqlite's async mode, but was too lazy to recompile it and just switched the design to in-memory (sqlite was used only for the index part - I am not such a big fan of binary data in databases yet...).
      Pooling updates/writes (as in your transactions hint) was planned to streamline sqlite, but I pulled the plug on this when I opted for the in-memory approach.

      Thanks for all your help, guys! Until I need to handle more than 25,000,000 files, the plain fs will do (without re-inventing the wheel...).
        You're very welcome.

        For completeness, I should mention that modern versions of sqlite can be put into and out of synchronous mode with a pragma, rather than recompilation.
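
        Something along these lines, assuming a DBI/DBD::SQLite handle (the values are standard SQLite: OFF, NORMAL, FULL):

            $dbh->do('PRAGMA synchronous = OFF');    # quick-but-unsafe for a bulk load
            # ... lots of inserts ...
            $dbh->do('PRAGMA synchronous = FULL');   # back to slow-but-safe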

        (I've been very impressed with sqlite. It has limitations, but the docs are up-front about them, it is so easy to get started with, and it feels very robust to me.)
