http://www.perlmonks.org?node_id=377257


in reply to Re^2: Combining Ultra-Dynamic Files to Avoid Clustering (Ideas?)( A DB won't help)
in thread Combining Ultra-Dynamic Files to Avoid Clustering (Ideas?)

My expectation is that most databases would use a well-known data structure (such as a B-tree) to store this kind of data, which avoids a million directory entries and also allows for variable-length data. I admit that an RDBMS might do this wrong, but I'd expect most of them to get it right on the first try. Certainly BerkeleyDB will.
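To make that concrete, here is a minimal sketch (not the benchmark code below) of the idea using DB_File: a single B-tree file holds every record, keyed by a packed record number, and a record's value simply grows as numbers are appended to it.

  use strict;
  use warnings;
  use DB_File;
  use Fcntl qw(O_RDWR O_CREAT);

  # One B-tree file replaces a million small files and their directory entries.
  my %records;
  tie %records, 'DB_File', 'records.db', O_RDWR | O_CREAT, 0644, $DB_BTREE
      or die "Cannot tie records.db: $!";

  # Append a 4-byte big-endian number to record 12345; the B-tree handles
  # the variable-length value.
  my $key = pack 'N', 12345;
  $records{$key} = '' unless exists $records{$key};
  $records{$key} .= pack 'N', 42;

  untie %records;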

Using DB_File (a sketch of both layouts follows the results below):

  1. 512,000,000 numbers appended randomly to one of 1,000,000 records indexed by pack 'N', $fileno

    Actual data stored (1,000,000 * 512 * 4 bytes) : 1.90 GB

    Total filesize on disk : 4.70 GB

    Total runtime (projected based on 1%) : 47 hours

  2. 512,000,000 numbers written one per record, indexed by pack 'NN', $fileno, $position ($fileno runs 0 .. 999,999; $position runs from 0 to ~512 on average per record).

    Actual data stored (1,000,000 * 512 * 4 bytes) : 1.90 GB

    Total filesize on disk : 17.00 GB (Estimate)

    Total runtime (projected based on 1%) : 80 hours* (default settings)

    Total runtime (projected based on 1%) : 36 hours* ( cachesize => 100_000_000 )

    (*) Projections based on 1% probably grossly underestimate the total runtime, as it was observed that even at these low levels of fill, each new 0.1% took longer than the previous one.

    Further, I left the latter test running while I slept. It had reached 29.1% before I left it; five hours later it had reached 31.7%. I suspect that it might never complete.
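For reference, here is a cut-down sketch of the two layouts (the file names, loop size, and variable names such as $fileno and $position are illustrative; this is not the benchmark script itself):

  use strict;
  use warnings;
  use DB_File;
  use Fcntl qw(O_RDWR O_CREAT);

  # Scheme 1: key = pack 'N', $fileno; each 4-byte value is appended,
  # so each of the 1,000,000 records grows towards ~512 numbers.
  my %append;
  tie %append, 'DB_File', 'scheme1.db', O_RDWR | O_CREAT, 0644, $DB_BTREE
      or die "scheme1.db: $!";

  # Scheme 2: key = pack 'NN', $fileno, $position; one number per record.
  # Setting cachesize before this tie corresponds to the 100 MB cache run.
  $DB_BTREE->{cachesize} = 100_000_000;
  my %single;
  tie %single, 'DB_File', 'scheme2.db', O_RDWR | O_CREAT, 0644, $DB_BTREE
      or die "scheme2.db: $!";

  my %next_pos;                     # next free position within each record
  for ( 1 .. 1_000 ) {              # 512,000,000 in the real runs
      my $fileno = int rand 1_000_000;
      my $value  = int rand 2**32;

      my $k1 = pack 'N', $fileno;
      $append{$k1} = '' unless exists $append{$k1};
      $append{$k1} .= pack 'N', $value;

      my $position = $next_pos{$fileno} || 0;
      $next_pos{$fileno} = $position + 1;
      $single{ pack 'NN', $fileno, $position } = pack 'N', $value;
  }

  untie %append;
  untie %single;

Note that in scheme 2 every number carries an 8-byte key plus the B-tree's per-item overhead, which is where the gap between the 1.90 GB of actual data and the ~17 GB estimated file size comes from.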

Essentially, this bears out exactly what I predicted at Re: Combining Ultra-Dynamic Files to Avoid Clustering (Ideas?)( A DB won't help).


Examine what is said, not who speaks.
"Efficiency is intelligent laziness." -David Dunham
"Think for yourself!" - Abigail
"Memory, processor, disk in that order on the hardware side. Algorithm, algoritm, algorithm on the code side." - tachyon