PerlMonks

Re: Combining Ultra-Dynamic Files to Avoid Clustering (Ideas?)

by dragonchild (Archbishop)
on Jul 24, 2004 at 21:08 UTC ( #377180 )


in reply to Combining Ultra-Dynamic Files to Avoid Clustering (Ideas?)

After reading through the problem and offered solutions ... I have a few thoughts:
  1. 4KB standard cluster size. That very phrase implies you unconsciously know that cluster size can be changed. I had always been under the impression that block size was 512 bytes and that you could alter your cluster:block ratio from 8:1 down to 1:1. 0.5K clusters should do you just fine, right?
  2. The inode limit is going to be your bigger issue. Potentially, you could partition a bunch of times, but that's annoying to manage.
  3. Your entire problem statement is poorly defined. Granted, there is obviously information you don't feel that you can post on a public board, and that information would probably fill some of the gaps I have in my understanding. But, I think your problem really boils down to the following:
    1. You need N buckets, where N is some really really large number.
    2. Each bucket will contain some number of values each taking up 4 bytes of space.
    3. Each bucket may contain a different number of these values.
    4. The maximum number of values in any bucket will be M.
    5. Each bucket has a unique identity.
    6. The buckets will be built on an infrequent basis.
    7. The buckets will be searched on a very frequent basis.
    8. Finding the right bucket as fast as possible is the overriding priority.
    9. Reducing the space taken for this solution is a very high priority.
    I am assuming the 4th statement, but it's a reasonable assumption. M may be very large relative to the average number of values in any bucket, but that's ok. This means you need, at most, 4 * M * N bytes. But there is often a limit on how big a given file can be. Let's say that limit is 2GB (a common value, though not the only one). So you would need (4 * M * N) / (2 * 1024 * 1024 * 1024) files (rounded up, of course). Oh, and an index file. This file would need to be (4 + 1 + 4) * N bytes long: 4 for the bucket identifier, 1 for the file identifier, and 4 for the location to seek to in that file. (There's a pack/unpack sketch of one such record below.)

    I'm assuming that you will never have more than 256 files, because that would mean you were working with, potentially, half a terabyte of data. If that were the case, you wouldn't be complaining about size considerations.

    I'm also assuming you don't have more than 238,609,294 buckets. If you did, you would need a second index file.
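
    To make that 9-byte index record concrete, here's a minimal Perl sketch of packing and unpacking one record (untested, per my signature). The format string and the assumption that records are written in bucket-id order are mine, not part of your problem statement; open the index with the :raw layer so nothing mangles the bytes.

      use strict;
      use warnings;

      # One fixed-width index record: 4-byte bucket id, 1-byte file number,
      # 4-byte offset into that data file -- 9 bytes, as described above.
      my $INDEX_FMT = 'N C N';   # big-endian unsigned 32 / 8 / 32 bits
      my $INDEX_LEN = 9;

      sub pack_index_rec {
          my ($bucket_id, $file_no, $offset) = @_;
          return pack $INDEX_FMT, $bucket_id, $file_no, $offset;
      }

      # Assuming records are written in bucket-id order, looking up a bucket
      # is one seek into the index, then one seek into the right data file.
      sub read_index_rec {
          my ($idx_fh, $rec_no) = @_;
          seek $idx_fh, $rec_no * $INDEX_LEN, 0 or die "seek: $!";
          read($idx_fh, my $buf, $INDEX_LEN) == $INDEX_LEN or die "short read";
          return unpack $INDEX_FMT, $buf;   # ($bucket_id, $file_no, $offset)
      }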

Given all that, you could take the following approach, if you had a lot of temp space to work in.

  1. Generate your data, using a fixed record size of M values. Let's say M is 1024. This would mean you would fit 512 * 1024 records in each 2GB file. So, if you had 5 million records, you would end up with 10 2GB files - a total of 20GB. Do not generate the index file yet.
  2. After you've generated your data, go through it all again. Now, you're going to compact your files as well as generate your index. I'm going to assume there is a value that isn't legal data that you can use as a flag. Let's say it's -1, which as a 4-byte value is represented by 0xFFFFFFFF. So:
    1. Read in the next record. (sysread() the next 4096 bytes.) Strip off the trailing 0xFFFFFFFF padding values. See if you have enough space in the compacted file you're writing to. If you do, write to it and update your index. If you don't, close that file, open a new one, increment the fileno in your index, and write it out. (There's a sketch of this loop after this list.)
    2. Once you're done reading a file, delete it from temp.
    3. If you average 150 values per bucket, this would translate to (4 * 150 * 5,000,000) bytes, or 2 data files and one index file. One data file would be right around 2GB and the other roughly 813MB. The index file would end up being (9 * 5,000,000) bytes, or a little under 43MB.
  • This, however, does require around 22GB of temp space, at the worst case. Depending on how sparse the first file you compact is, it may require closer to 20.2GB. It also requires a bunch of processing time. The compacting process would probably take, given a dual 2.6 Xeon with 4GB of RAM and not much else on the box ... a couple hours. The writing portion of the generating process wouldn't be very bad at all.
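
    Here's roughly what the compacting pass might look like in Perl -- untested, as always, and the file names (temp.*, data.NNN, bucket.idx) are just placeholders of mine. It assumes pass 1 wrote fixed 4096-byte records (M = 1024 values), padded out with the 0xFFFFFFFF flag, in bucket-id order.

      use strict;
      use warnings;

      my $REC_BYTES = 4096;                    # M = 1024 values * 4 bytes each
      my $SENTINEL  = "\xFF\xFF\xFF\xFF";      # packed -1, the "no value" flag
      my $MAX_OUT   = 2 * 1024 * 1024 * 1024;  # 2GB cap per compacted data file

      open my $idx, '>:raw', 'bucket.idx' or die "bucket.idx: $!";

      my ($file_no, $offset, $out) = (-1, 0, undef);
      my $new_outfile = sub {
          $file_no++;
          $offset = 0;
          open $out, '>:raw', sprintf('data.%03d', $file_no) or die "data file: $!";
      };
      $new_outfile->();

      my $bucket_id = 0;
      for my $temp (sort glob 'temp.*') {      # the fixed-width files from pass 1
          open my $in, '<:raw', $temp or die "$temp: $!";
          while (my $n = sysread($in, my $rec, $REC_BYTES)) {
              die "short read in $temp" if $n != $REC_BYTES;

              # Strip the trailing run of 0xFFFFFFFF padding values.
              $rec =~ s/(?:\Q$SENTINEL\E)+\z//;

              # Don't let a compacted file cross the 2GB limit.
              $new_outfile->() if $offset + length($rec) > $MAX_OUT;

              print {$idx} pack('N C N', $bucket_id, $file_no, $offset);
              print {$out} $rec;
              $offset += length $rec;
              $bucket_id++;
          }
          close $in;
          unlink $temp or warn "unlink $temp: $!";   # reclaim temp space as we go
      }

    Deleting each temp file as soon as it has been compacted is what keeps the peak usage around 22GB instead of the full 20GB of temp plus another copy of everything.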

    ------
    We are the carpenters and bricklayers of the Information Age.

    Then there are Damian modules.... *sigh* ... that's not about being less-lazy -- that's about being on some really good drugs -- you know, there is no spoon. - flyingmoose

    I shouldn't have to say this, but any code, unless otherwise stated, is untested

    Re^2: Combining Ultra-Dynamic Files to Avoid Clustering (Ideas?)
    by mhi (Friar) on Jul 24, 2004 at 22:12 UTC
      I may have missed something here and therefore the following approach might be oversimplified.

      I'd write the data to a flat file in the first pass, with the file structured as lines of key-value pairs. The key would represent the "filename" and the value one of the OP's 4-byte values. Make sure a new file is started before the maximum file length for the OS or the FS is reached.

      If the "filename" is too long, I would create a separate file mapping each "filename" to a shorter key. Obviously each key will occur as many times as there are values for it, each time on a separate line. The order of the values (should they matter) will be preserved in the order of the lines.

      In the second pass, once all the values have been written to the file(set), analyze it once for each key and write all of the values per key into a single arbitrary-length record of a new target file(set). In the third pass, create the index on the target file(set). (A sketch of these passes appears below.)

      In this way the first-pass file(set) will accept values for keys in any order, appending them to the end of the file, and will not waste space on large records that won't be needed most of the time. The second pass will take a whole lot of time, but as I understand it, time is not the issue here.

      Generally, if space is a major consideration, a DBMS is the last thing I would look at. There's just too much overhead involved in making it work with all kinds of data structures.
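
      To make the passes concrete, here's a rough, untested Perl sketch. The file names and the text index format are just illustrations, and I cheat slightly in the second pass by pre-sorting the flat file with a stable sort(1) (-s, so each key's values keep their original order) rather than re-scanning the whole file once per key.

        use strict;
        use warnings;

        # Pass 1: append one "key value" line per data point.
        sub record_value {
            my ($key, $value) = @_;
            open my $fh, '>>', 'pass1.dat' or die "pass1.dat: $!";
            print {$fh} "$key $value\n";
            close $fh;
        }

        # Pass 2: gather each key's values into one variable-length packed
        # record, writing a "key offset count" index line as we go.
        sub compact {
            open my $in,  '-|', 'sort', '-s', '-k1,1', 'pass1.dat' or die "sort: $!";
            open my $out, '>:raw', 'pass2.dat' or die "pass2.dat: $!";
            open my $idx, '>',     'pass2.idx' or die "pass2.idx: $!";

            my ($cur, @vals);
            my $offset = 0;
            my $flush = sub {
                return unless defined $cur;
                my $rec = pack 'N*', @vals;    # the 4-byte values, back to back
                print {$idx} "$cur $offset " . scalar(@vals) . "\n";
                print {$out} $rec;
                $offset += length $rec;
            };
            while (my $line = <$in>) {
                my ($key, $value) = split ' ', $line;
                if (!defined $cur or $key ne $cur) {
                    $flush->();
                    $cur  = $key;
                    @vals = ();
                }
                push @vals, $value;
            }
            $flush->();
        }

      A lookup then reads the key's index line and does a single seek and read of count * 4 bytes from the target file.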

      Update: Corrected spelling mistake. Added Comment on DB.
