Combining Ultra-Dynamic Files to Avoid Clustering (Ideas?)
by rjahrman (Scribe) on Jul 24, 2004 at 05:02 UTC
rjahrman has asked for the wisdom of the Perl Monks concerning the following question:
Let's say that I have 1,000,000 binary files, each containing a variable number of 4-byte integers (though that is mostly irrelevant). The files are built at the same time: an integer is appended to one file, then another integer is appended to an arbitrary different file, and so the cycle repeats.
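For concreteness, appending one 4-byte integer to a file is just a pack and an append; the helper and file names here are my own, not part of the question:

```perl
use strict;
use warnings;

# Hypothetical helper: append one 4-byte (32-bit little-endian)
# integer to the named file, creating it if needed.
sub append_int {
    my ($path, $n) = @_;
    open my $fh, '>>:raw', $path or die "open $path: $!";
    print {$fh} pack('V', $n);    # 'V' = unsigned 32-bit little-endian
    close $fh or die "close $path: $!";
}

append_int('subfile_42.bin', 12345);
```

With a million such files, each holding only a handful of these 4-byte records, almost every one sits far below the cluster size, which is the waste described below.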
Anyway, because the majority of these files will be far under the 4KB standard cluster size, storing them as separate files would waste a lot of disk space--which is limited in this project. So, I'm going to attempt to put all of the files together into one huge file, and then have a separate ID-to-file-location index in a separate file (though it will be stored in RAM as arrays and/or hashes until the building of the mega-file is complete). Simple enough, right? Until I realized one _major_ problem with this approach . . .
The fact that the files are built at the same time! For every integer that is added in the middle of the mega-file (i.e. almost all of them), the location of every sub-file after the insertion point would have to change!
Now my thinking is to have one array that stores the size of every sub-file, as well as a separate array that, for every 1000 sub-files or so, has the total size of all of the sub-files before it. Hence, to get the file location of a sub-file while building the mega-file I would only have to go back to the last "marker" and then add to it the sizes of all sub-files between the marker and the desired location. If I did this, I could also do my own very small "cluster" size, such that the numbers would only have to be updated every 100 entries or so, but the wasted disk space would be minimal.
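A sketch of that bookkeeping, assuming a marker every 1000 sub-files; the array and sub names, and the example sizes, are mine, not anything standard:

```perl
use strict;
use warnings;

my $STRIDE = 1000;     # one marker per 1000 sub-files
my $N      = 2000;     # number of sub-files (example value)
my @size   = (0) x $N; # current byte size of each sub-file
# $marker[$m] = total bytes in all sub-files before index $m * $STRIDE
my @marker = (0) x (1 + int(($N - 1) / $STRIDE));

# Record an append of $bytes to sub-file $id: bump its own size,
# plus every marker that lies after it.
sub note_append {
    my ($id, $bytes) = @_;
    $size[$id] += $bytes;
    $marker[$_] += $bytes for (int($id / $STRIDE) + 1 .. $#marker);
}

# Offset of sub-file $id in the mega-file: start from the nearest
# marker at or below $id, then add the sizes in between.
sub offset_of {
    my ($id) = @_;
    my $m   = int($id / $STRIDE);
    my $off = $marker[$m];
    $off += $size[$_] for ($m * $STRIDE .. $id - 1);
    return $off;
}
```

Each append then touches one size entry and at most a handful of markers, and each lookup scans at most $STRIDE size entries, instead of re-summing all million sub-files on every write.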
My question is (finally--grin), how would you attack this problem? Any ideas?
HUGE thanks in advance!