http://www.perlmonks.org?node_id=377257


in reply to Re^2: Combining Ultra-Dynamic Files to Avoid Clustering (Ideas?)( A DB won't help)
in thread Combining Ultra-Dynamic Files to Avoid Clustering (Ideas?)

My expectation is that most databases would use a well-known data structure (such as a BTree) to store this kind of data, which avoids a million directory entries and also allows for variable-length data. I admit that an RDBMS might do this wrong, but I'd expect most of them to get it right first try. Certainly BerkeleyDB will.
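
Roughly, that idea looks like this with DB_File (a minimal sketch; the filename and keys are invented for illustration): one BTree-backed file stands in for the million small files, keys stand in for filenames, and the records can be any length.

  use strict;
  use warnings;
  use Fcntl;      # O_RDWR, O_CREAT
  use DB_File;    # exports $DB_BTREE

  # One disk file holds everything; no per-"file" directory entries.
  tie my %store, 'DB_File', 'allfiles.btree', O_RDWR|O_CREAT, 0666, $DB_BTREE
      or die "Cannot tie allfiles.btree: $!";

  # Keys replace filenames; records may be any length.
  $store{ pack 'N', 42 }      = pack 'N*', 1, 2, 3;
  $store{ pack 'N', 999_999 } = pack 'N*', 1 .. 512;

  untie %store;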

Using DB_File:

  1. 512,000,000 numbers appended randomly to one of 1,000,000 records indexed by pack 'N', $fileno

    Actual data stored (1000000 * 512 * 4) : 1.90 GB

    Total filesize on disk : 4.70 GB

    Total runtime (projected based on 1%) : 47 hours

  2. 512,000,000 numbers written one per record, indexed by pack 'NN', $fileno, $position (fileno 0 .. 999,999; position 0 .. ~512 on average).

    Actual data stored (1000000 * 512 * 4) : 1.90 GB

    Total filesize on disk : 17.00 GB (Estimate)

    Total runtime (projected based on 1%) : 80 hours* (default settings)

    Total runtime (projected based on 1%) : 36 hours* (cachesize => 100_000_000)

    (*) Projections based on 1% probably grossly underestimate the total runtime, as it was observed that even at these low levels of fill, each new 0.1% took longer than the previous one.

    Further, I left the latter test running while I slept. It had reached 29.1% when I left it; 5 hours later it had reached 31.7%. I suspect that it might never complete.

Essentially, this bears out exactly what I predicted at Re: Combining Ultra-Dynamic Files to Avoid Clustering (Ideas?)( A DB won't help).
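
For concreteness, here is a rough sketch of what the two record layouts above boil down to in DB_File terms (my paraphrase of the setups; the filename and placeholder values are invented, and the cachesize line shows how the larger cache in the faster run would be set):

  use strict;
  use warnings;
  use Fcntl;
  use DB_File;

  # The larger cache used in the faster setup-2 run; set before tie()ing.
  $DB_BTREE->{'cachesize'} = 100_000_000;

  tie my %db, 'DB_File', 'combined.btree', O_RDWR|O_CREAT, 0666, $DB_BTREE
      or die "Cannot tie: $!";

  my ( $fileno, $position, $number ) = ( 12_345, 17, 0xDEADBEEF );  # placeholders

  # Setup 1: one record per "file"; each new number is appended to its record.
  $db{ pack 'N', $fileno } .= pack 'N', $number;              # 1,000,000 keys; records grow to ~2 KB

  # Setup 2: one record per number.
  $db{ pack 'NN', $fileno, $position } = pack 'N', $number;   # 512,000,000 keys; 4-byte records

  untie %db;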


Examine what is said, not who speaks.
"Efficiency is intelligent laziness." -David Dunham
"Think for yourself!" - Abigail
"Memory, processor, disk in that order on the hardware side. Algorithm, algoritm, algorithm on the code side." - tachyon

Re^4: Combining Ultra-Dynamic Files to Avoid Clustering (Ideas?)( A DB won't help)
by Your Mother (Archbishop) on Jul 27, 2004 at 22:09 UTC

    That's great. You may have already tried this and it might be moot, but does presizing the "array" to 512_000_000 elements help with performance?

      Firstly, the performance wasn't really the issue at hand; the question was related to disk storage rather than performance. The reason for highlighting the time taken was to excuse my having based my conclusions upon a miserly 1% of a complete test rather than having sat around for 2 days x N tests :)

      I'm not really sure what you mean by 'array' in the context of using DB_File?

      No array of 512_000_000 elements is ever generated. It's doubtful whether most 32-bit machines could access that much memory.

      The test program just looped 512_000_000 times (or would have if I had let it), generating a random fileno and data value at each iteration. These were then used to fill in the values of a tied hash underlain by a disk-based BTree DB file.
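
      In skeleton form, something along these lines (simplified, and shown here for the first setup, where each number is appended to the record for its fileno):

      use strict;
      use warnings;
      use Fcntl;
      use DB_File;

      tie my %db, 'DB_File', 'combined.btree', O_RDWR|O_CREAT, 0666, $DB_BTREE
          or die "Cannot tie: $!";

      for ( 1 .. 512_000_000 ) {
          my $fileno = int rand 1_000_000;                 # which "file" this datum belongs to
          my $value  = int rand 2**32;                     # random 32-bit data value
          $db{ pack 'N', $fileno } .= pack 'N', $value;    # append it to that record
      }

      untie %db;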



        I was thinking RECNO, not BTREE (right?), and you're right, I missed the point :) So the idea is neither relevant nor any good; it was just curiosity about whether the following...

        use Fcntl;     # for the O_RDWR and O_CREAT flags
        use DB_File;   # provides $DB_RECNO

        my @h;
        tie @h, "DB_File", $filename, O_RDWR|O_CREAT, 0666, $DB_RECNO;

        # starting with something like...
        $h[512_000_000] = "orange";

        would behave like presizing a real array, which saves the overhead of growing and shrinking it. I just tried a bit of test code, and some rather unscientific timings show that it makes little difference; if anything, it counts slightly against the pre-sizing.
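
        The sort of comparison I mean is roughly this (scaled well down, with made-up filenames, and no more scientific than advertised):

        use strict;
        use warnings;
        use Fcntl;
        use DB_File;
        use Benchmark qw(timethese);

        my $size   = 1_000_000;    # scaled down from 512_000_000
        my $writes = 100_000;

        unlink 'plain.recno', 'presized.recno';
        tie my @plain,    'DB_File', 'plain.recno',    O_RDWR|O_CREAT, 0666, $DB_RECNO or die $!;
        tie my @presized, 'DB_File', 'presized.recno', O_RDWR|O_CREAT, 0666, $DB_RECNO or die $!;

        $presized[ $size ] = "orange";    # "presize" by touching the last element

        timethese( 1, {
            plain    => sub { $plain[    int rand $size ] = int rand 2**32 for 1 .. $writes },
            presized => sub { $presized[ int rand $size ] = int rand 2**32 for 1 .. $writes },
        });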