Using a database (whether RDBMS or other) won't help you either save diskspace or improve performance.
Building your own index is equally unlikely to help. It takes at least a 4-byte integer to index a 4-byte integer. Plus some way of indicating which file each belongs to. With a million files, that a least 20 bits per. And you still have to store the data. I would use a single file with a fixed size chunk allocated to each file and store this in a compressing filesystem. (Or a sparse filesystem if you have one available.) I just wrote a 1_000_000 x 4096 byte records, each containing a random number (0--1023) of random integers. The notionally 3.81 GB (4,096,000,000) file, actually occupies 2.42 GB of disc space. So even though potentially half of every 'file' is empty, the compression compenates. It runs somewhat more slowly both the initial creation (I preallocated continguous space), and random access, than an uncompressed file, but not by much thanks to filesystem buffering. In any case, it will be considerably quicker than access via a RDBMS. Even if your files can vary widely in used size, nulling the whole file before you start will allow the compression mechanism to reduce the 'wasted' space to a minimum. A 10 GB file containing only nulls requires less that 40MB to store. The best bit is that using a single file saves a million directory entries in the filesystem, and having to juggle a million filehandles with associated system buffers and data structures in RAM. A nice saving. You will have to remember the 'append point' for each of the files, but that is just a million 4/8 bytes numbers. A single file of 4/8 MB. In reply to Re: Combining Ultra-Dynamic Files to Avoid Clustering (Ideas?)( A DB won't help)
In reply to Re: Combining Ultra-Dynamic Files to Avoid Clustering (Ideas?) (A DB won't help)
by BrowserUk