Re: Perl solution for storage of large number of small files

by salva (Abbot)
on Apr 30, 2007 at 09:50 UTC


in reply to Perl solution for storage of large number of small files

You can try using another database backend, for instance DBI + DBD::SQLite.
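
For illustration, a minimal sketch of that approach - storing each small file as a BLOB row via DBI + DBD::SQLite. The database file, table and column names here are made up, and error handling is left to RaiseError:

use strict;
use warnings;
use DBI qw(:sql_types);

# One SQLite database file holds all the small files as BLOB rows.
my $dbh = DBI->connect('dbi:SQLite:dbname=files.db', '', '',
                       { RaiseError => 1, AutoCommit => 1 });

$dbh->do(q{
    CREATE TABLE IF NOT EXISTS files (
        name TEXT PRIMARY KEY,
        data BLOB
    )
});

# Store one small file under a key.
sub store_file {
    my ($name, $data) = @_;
    my $sth = $dbh->prepare(
        'INSERT OR REPLACE INTO files (name, data) VALUES (?, ?)');
    $sth->bind_param(1, $name);
    $sth->bind_param(2, $data, SQL_BLOB);  # tell the driver the value is binary
    $sth->execute;
}

# Fetch it back.
sub fetch_file {
    my ($name) = @_;
    my ($data) = $dbh->selectrow_array(
        'SELECT data FROM files WHERE name = ?', undef, $name);
    return $data;
}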

Re^2: Perl solution for storage of large number of small files
by isync (Hermit) on Apr 30, 2007 at 10:01 UTC
    Been there, done that. Actually for the meta-data index of the heavy-load storage...
    The first incarnation was a DBM:mldb. The second version used SQLite, with which I ran into heavy disk IO overhead when inserting/updating meta-data; now the index is an in-memory plain data structure...
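
    For what it's worth, the usual way to tame that kind of per-insert overhead is to batch the updates into a single transaction, so SQLite syncs to disk once per batch rather than once per row - roughly like this, assuming a DBI handle $dbh with AutoCommit on and an invented meta table:

    $dbh->begin_work;                # one transaction for the whole batch
    my $sth = $dbh->prepare(
        'INSERT OR REPLACE INTO meta (file_key, info) VALUES (?, ?)');
    for my $key (keys %updates) {    # %updates: pending meta-data changes
        $sth->execute($key, $updates{$key});
    }
    $dbh->commit;                    # a single sync instead of thousands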

    So, do you actually recommend sqlite as storage for binary data?
      So, do you actually recommend sqlite as storage for binary data?

      Well, I neither recommend nor advise against it. I was only suggesting that you try another backend!

      Which database is best for a given problem depends not only on the data structures but also on the usage pattern.

      Anyway, if you need to access 2GB of data randomly, there is probably nothing you can do to stop disk thrashing other than adding more RAM to your machine, so that all the disk sectors used for the database remain cached.

        Hi isync and salva, interesting topic.

        Anyway, if you need to access 2GB of data randomly, there is probably nothing you can do to stop disk thrashing other than adding more RAM to your machine, so that all the disk sectors used for the database remain cached.

        In this situation - more data than memory, but not loads more - I've found memory mapping works well. In my case the data accesses were randomly scattered but with a non-uniform distribution: although the access wasn't sequential, some data was accessed more often than the rest. So memory mapping meant that the often-accessed data stayed cached in RAM.
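
        In Perl that can be as simple as mapping the data file with, say, File::Map from CPAN; the file name, offset and length below are only illustrative:

        use strict;
        use warnings;
        use File::Map qw(map_file);

        # Map the whole data file; the OS pages in only what is touched
        # and keeps the frequently used pages cached in RAM.
        map_file my $map, 'data.bin';

        # Random read access by offset/length, no explicit read() calls.
        my ($offset, $length) = (1_048_576, 4096);   # wherever a record lives
        my $record = substr $map, $offset, $length;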

        Any decent database should be able to do pretty much the same thing - as long as you configure it with a big query cache - although disk access will be slower than for memory mapping.
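
        With SQLite, for instance, that roughly corresponds to raising the page cache on the handle; the figure below is only an example, and cache_size is counted in database pages:

        # Let SQLite keep more database pages in its in-process cache.
        $dbh->do('PRAGMA cache_size = 100000');   # example value, in pages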

        The real problem comes if you're making a lot of changes to the data, which busts your cache...

        Best wishes, andye
