PerlMonks

Re: Store a huge amount of data on disk

by BrowserUk (Pope)
on Oct 18, 2011 at 15:37 UTC (#932178)

in reply to Store a huge amount of data on disk

Each item has a unique (alphanumeric, 7-bit-ASCII) id

How long? (I.e., what range?)


Replies are listed 'Best First'.
Re^2: Store a huge amount of data on disk
by Sewi (Friar) on Oct 18, 2011 at 15:48 UTC

    About 16 to 32 bytes; any limit >= 16 bytes would be fine and could still be applied.

    I should be able to switch this to a 64-bit integer if required, but I'd prefer to keep the current alphanumeric ids.

      Sounds like you're indexing your data by a hex-encoded digest?

      Given that each index key has three variable and possibly huge chunks associated with it -- which most RDBMSs handle by writing to the filesystem anyway -- and that your selection criteria are both fixed and simple, I'd use the filesystem directly.

      Subdivide the key into chunks that make individual directories contain at most a reasonable number of entries and then store the 3 sections in files at the deepest level.

      By splitting a 32-character hex digest into 4-character chunks, no directory has more than 16^4 = 65,536 entries (use 2-character chunks if you want to cap that at 256). The filesystem cache will hold the lower levels, and the upper levels will be both fast to read from disk and quick to search, especially if your filesystem hashes its directory entries.
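      A minimal Perl sketch of that key-splitting; the sub name `digest_to_dir` and the `/data` root are my own illustrative choices, not from the thread:

```perl
use strict;
use warnings;

# Turn a 32-character hex digest into the nested directory path
# described above: eight 4-character chunks joined under a root.
sub digest_to_dir {
    my ($digest, $root) = @_;
    die "expected 32 hex chars\n" unless $digest =~ /\A[0-9a-fA-F]{32}\z/;
    my @chunks = unpack '(A4)*', lc $digest;   # eight 4-char pieces
    return join '/', $root, @chunks;
}

print digest_to_dir('8fbe7eb8c04c744406cca0aeb67e4f7f', '/data'), "\n";
# /data/8fbe/7eb8/c04c/7444/06cc/a0ae/b67e/4f7f
```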

      I'd write the individual chunks of the two text parts to separate files unless they will always be loaded as a single entity, in which case concatenating them might be slightly faster.

      Overall, given a digest of 8fbe7eb8c04c744406cca0aeb67e4f7f, I'd lay the directory structure out like this:

      /data/8fbe/7eb8/c04c/7444/06cc/a0ae/b67e/4f7f/meta.txt
      /data/8fbe/7eb8/c04c/7444/06cc/a0ae/b67e/4f7f/text1.000
      /data/8fbe/7eb8/c04c/7444/06cc/a0ae/b67e/4f7f/text1.001
      /data/8fbe/7eb8/c04c/7444/06cc/a0ae/b67e/4f7f/text1.002
      /data/8fbe/7eb8/c04c/7444/06cc/a0ae/b67e/4f7f/text1....
      /data/8fbe/7eb8/c04c/7444/06cc/a0ae/b67e/4f7f/text2.000
      /data/8fbe/7eb8/c04c/7444/06cc/a0ae/b67e/4f7f/text2.001
      /data/8fbe/7eb8/c04c/7444/06cc/a0ae/b67e/4f7f/text2....
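      Storing one item under that layout might look like the sketch below; the 64 KiB chunk size for the numbered text files and the names `store_item` / `write_chunks` are assumptions for illustration, not anything the thread specifies:

```perl
use strict;
use warnings;
use File::Path qw(make_path);

# Write one item's three sections under the nested-digest layout.
sub store_item {
    my ($root, $digest, $meta, $text1, $text2) = @_;
    my $dir = join '/', $root, unpack '(A4)*', lc $digest;
    make_path($dir);                          # create the nested directories

    open my $mfh, '>', "$dir/meta.txt" or die "meta.txt: $!";
    print {$mfh} $meta;
    close $mfh;

    write_chunks("$dir/text1", $text1);
    write_chunks("$dir/text2", $text2);
    return $dir;
}

# Split a string into numbered files: base.000, base.001, ...
sub write_chunks {
    my ($base, $text, $chunk_size) = @_;
    $chunk_size //= 64 * 1024;                # assumed chunk size
    my $n = 0;
    while (length $text) {
        my $part = substr $text, 0, $chunk_size, '';  # 4-arg substr consumes $text
        open my $fh, '>', sprintf('%s.%03d', $base, $n++) or die $!;
        print {$fh} $part;
        close $fh;
    }
}
```

      Keeping the chunk files numbered with a fixed-width suffix means a plain sorted readdir returns them in delivery order.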

      With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
      Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
      "Science is about questioning the status quo. Questioning authority".
      In the absence of evidence, opinion is indistinguishable from prejudice.

        Thanks, I have used that solution for a number of issues in the past, but I'm not sure whether our filesystem will cope with this number of files and still stay maintainable.

        I've learned that any whole-tree operation (like a move or a backup) may become arbitrarily long, and filesystems suffer over time under very large file counts.

        I thought about merging multiple items into one file (perhaps replacing the last directory level of your suggestion), but the items vary widely in size, and the text blocks arrive unsorted yet must ultimately be delivered in their correct order, which is hard to handle.
