http://www.perlmonks.org?node_id=377259


in reply to Combining Ultra-Dynamic Files to Avoid Clustering (Ideas?)

Rather than writing 1,000,000 files x 4,096 bytes, turn the problem around.

Write 1024 files x 4,000,000 bytes.

The original file number x 4 becomes the offset into the new file, and the position within the original file (that is, the record index) becomes the new file number.
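
To make the mapping concrete, here is a minimal Perl sketch. The data/ directory, the filenames, and the helper names are illustrative, not from the original post:

    use strict;
    use warnings;
    use Fcntl qw( O_RDWR O_CREAT );

    my $RECSIZE = 4;    # each logical file grows in fixed 4-byte records

    # Store a 4-byte record: the record index picks the physical file,
    # the logical file number picks the offset within that file.
    sub write_record {
        my ( $file_no, $rec_no, $data ) = @_;
        my $path = "data/$rec_no.dat";                 # illustrative layout
        sysopen my $fh, $path, O_RDWR | O_CREAT or die "open '$path': $!";
        binmode $fh;
        seek $fh, $file_no * $RECSIZE, 0 or die "seek: $!";
        print {$fh} $data or die "write '$path': $!";
        close $fh or die "close '$path': $!";
    }

    # Fetch the record back; returns undef if it was never written.
    sub read_record {
        my ( $file_no, $rec_no ) = @_;
        open my $fh, '<', "data/$rec_no.dat" or return undef;
        binmode $fh;
        seek $fh, $file_no * $RECSIZE, 0 or die "seek: $!";
        my $got = read $fh, my $data, $RECSIZE;
        close $fh;
        return ( defined $got && $got == $RECSIZE ) ? $data : undef;
    }

    # "Appending" record 7 to logical file 123_456 becomes:
    write_record( 123_456, 7, pack 'N', 42 );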

This addresses both the > 2 GB problem and the 'maximum filesize assumption' problem: each of the 1024 files tops out at 4,000,000 bytes, comfortably below any 2 GB limit, and records beyond the assumed maximum simply go into newly created files rather than outgrowing existing ones.

Many of the 1024 files would be sparsely populated, but from what I read, XFS and ReiserFS support sparse files on Linux, and placing the files in a compressed directory would deal with that on Win32.
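
Creating a hole is just a matter of seeking past end-of-file before writing. A quick sketch to verify that the filesystem actually keeps the file sparse (the filename is illustrative, and st_blocks reporting may be unavailable on some platforms):

    use strict;
    use warnings;
    use Fcntl qw( O_WRONLY O_CREAT );

    # Write only the last 4-byte slot of a 4,000,000-byte file.
    sysopen my $fh, 'sparse.dat', O_WRONLY | O_CREAT or die "open: $!";
    binmode $fh;
    seek $fh, 3_999_996, 0 or die "seek: $!";
    print {$fh} pack 'N', 1;
    close $fh or die "close: $!";

    # On a sparse-capable filesystem the apparent size is 4,000,000
    # bytes while almost no blocks are allocated.
    my @st = stat 'sparse.dat' or die "stat: $!";
    printf "apparent: %d bytes, allocated: %d bytes\n",
        $st[7], $st[12] * 512;    # st_blocks counts 512-byte units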


Examine what is said, not who speaks.
"Efficiency is intelligent laziness." -David Dunham
"Think for yourself!" - Abigail
"Memory, processor, disk in that order on the hardware side. Algorithm, algoritm, algorithm on the code side." - tachyon