http://www.perlmonks.org?node_id=1017630


in reply to Re^2: Handling very big gz.Z files
in thread Handling very big gz.Z files

mbethke,

I think we agree!

What I referred to is that 'gzip' does a great job compressing text, and the result is a binary file. That file can then be compressed further by 'compress'. But I haven't done that since the RT or early RS/6000 days. I don't even know whether 'compress' still exists on AIX 6.1 or 7.1 (my in-house box with AIX 5.2 has it), but I found it "funny" to see the ".gz.Z" and remembered when it was done. I pointed it out in case the file was being created differently than the OP thought.
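If a file really did go through both tools, the layers just have to come off in the reverse order. A minimal Perl sketch, assuming the external 'uncompress' and 'gzip' commands are on the PATH (the filename is made up):

use strict;
use warnings;

# Outer layer first: 'uncompress -c' strips the .Z,
# then 'gzip -dc' strips the inner .gz.
my $file = 'data.gz.Z';    # hypothetical name
open my $fh, '-|', "uncompress -c \Q$file\E | gzip -dc"
    or die "Cannot open pipeline for $file: $!";
while (my $line = <$fh>) {
    print $line;    # or do real work per line here
}
close $fh or warn "pipeline exited with status $?";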

Just last week I fired up a Debian AMD box with 8 cores and four 2 TB drives.

Why bother with compression!

Regards...Ed

"Well done is better than well said." - Benjamin Franklin

Re^4: Handling very big gz.Z files
by mbethke (Hermit) on Feb 07, 2013 at 16:08 UTC

    Yup, "gz.Z" is strange indeed, although I don't think the extra compress would gain anything :)

    Compression is even more interesting on the huge machines we have nowadays than it used to be: someone found it's usually faster to compress memory that is about to be "swapped" and keep it in RAM than to write it to disk. The same goes for anything else disk-based, since CPU speed has grown much faster than disk speed.

    The BNC the OP is dealing with has 100 million word forms and would fit in memory on most machines, but meanwhile Google has raised the bar to a trillion word forms. They don't distribute that as text, but even their n-gram lists are 24 GB gzipped. If your HD sustains 100 MB/s, that's 4 minutes just to read the compressed file into memory, or 8 if it's twice the size uncompressed. But on a single core I can zcat at 154 MB/s, so it's simply faster to keep the data gzipped and unzip on the fly. Unzipping to a tempfile and reading that back is much slower on all but the fastest SSDs.
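
    A minimal Perl sketch of that unzip-on-the-fly approach (the filename and the word counting are my own illustration, not the OP's setup):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Read the gzipped corpus through a zcat pipe instead of
    # unpacking to a tempfile first: disk I/O stays at the
    # (smaller) compressed size and decompression overlaps reading.
    my $file = 'ngrams.gz';    # hypothetical input file
    open my $zh, '-|', 'zcat', $file
        or die "Cannot start zcat on $file: $!";

    my $words = 0;
    while (my $line = <$zh>) {
        $words += () = $line =~ /\S+/g;    # count word forms per line
    }
    close $zh or die "zcat exited with status $?";
    printf "%d word forms\n", $words;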