Re^2: Handling very big gz.Z files

by mbethke (Hermit)
on Feb 07, 2013 at 05:16 UTC


in reply to Re: Handling very big gz.Z files
in thread Handling very big gz.Z files

My background is AIX, where '.Z' is used by the 'compress/uncompress' system commands and '.gz' is used with the 'gzip/gunzip' system commands. Are you sure that the file wasn't created that way? 'compress' gets a 10% additional compression over 'gzip', and when disk drives were small that was a big deal. Today it is not worth the CPU cycles.

OT: I've yet to see a file that compress crunches to a smaller size than gzip. Actually, for a long time (before I heard of the patents) I thought everyone had ditched compress for gzip simply because compress sucks so badly in comparison. Today, people burn a lot more CPU cycles using lzma, xz & Co. for much better compression than either.
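If in doubt about which tool actually produced the file, the magic bytes settle it independently of the extension: compress(1) output starts with the bytes 0x1F 0x9D, gzip with 0x1F 0x8B. A minimal Perl sketch (not from the thread, just an illustration; pass whatever file you want to check on the command line):

    #!/usr/bin/perl
    # Sketch only: report whether a file looks like gzip or compress output
    # by its first two bytes instead of trusting the extension.
    use strict;
    use warnings;

    my $file = shift @ARGV or die "usage: $0 file\n";

    open my $fh, '<:raw', $file or die "open $file: $!\n";
    read($fh, my $magic, 2) == 2 or die "read $file: $!\n";
    close $fh;

    if    ($magic eq "\x1f\x8b") { print "$file: gzip data\n" }
    elsif ($magic eq "\x1f\x9d") { print "$file: compress (.Z) data\n" }
    else                         { print "$file: neither gzip nor compress\n" }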

Re^3: Handling very big gz.Z files
by flexvault (Monsignor) on Feb 07, 2013 at 12:12 UTC

    mbethke,

    I think we agree!

    What I referred to is that 'gzip' does a great job compressing text, and the result is a binary file. That file can then be compressed further by 'compress'. But I haven't done that since the RT or early RS/6000 days. I don't even know whether 'compress' still exists on AIX 6.1 or 7.1 (my in-house box with AIX 5.2 has it), but I found it "funny" to see the ".gz.Z" and remembered when that was done. I pointed it out in case the file was being created differently than the OP thought.

    Just last week I fired up a Debian AMD box with 8 cores and four 2 TB drives.

    Why bother with compression!

    Regards...Ed

    "Well done is better than well said." - Benjamin Franklin

      Yup, "gz.Z" is strange indeed, although I don't think the extra compress would gain anything :)

      Compression is even more interesting on the huge machines we have nowadays than it was before, since someone found it's usually faster to compress memory to be "swapped" and keep it in RAM than to write it to disk. The same goes for anything else disk-based, as CPU speed has grown much faster than disk speed. The BNC the OP is dealing with has 100 million word forms and would fit in memory on most machines, but meanwhile Google has raised the bar to a trillion word forms. They don't distribute that as text, but even their n-gram lists are 24 GB gzipped. If your HD sustains 100 MB/s, that's 4 minutes just to read it into memory, or 8 if it's twice the size uncompressed. But on a single core I can zcat at 154 MB/s, so it's simply faster to keep the stuff gzipped and unzip on the fly. Unzipping to a tempfile and reading that back is much slower on all but the fastest SSDs.
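      For reference, a minimal sketch of what "unzip on the fly" can look like in Perl, either by piping through an external zcat or with the core IO::Uncompress::Gunzip module. The corpus file name here is only a placeholder, not the OP's actual data set:

          #!/usr/bin/perl
          # Sketch only: two ways to read a gzipped corpus line by line without a tempfile.
          use strict;
          use warnings;
          use IO::Uncompress::Gunzip qw($GunzipError);

          my $corpus = shift @ARGV // 'ngrams.gz';   # placeholder name

          # Variant 1: pipe through an external zcat and read its output.
          open my $zh, '-|', 'zcat', $corpus or die "zcat $corpus: $!\n";
          while (my $line = <$zh>) {
              # process one line at a time here
          }
          close $zh;

          # Variant 2: pure Perl, using the core IO::Uncompress::Gunzip module.
          my $gz = IO::Uncompress::Gunzip->new($corpus)
              or die "gunzip $corpus: $GunzipError\n";
          while (my $line = $gz->getline) {
              # same streaming idea, no external process needed
          }
          $gz->close;

      For a genuine ".gz.Z" you would additionally need an uncompress step in front of the gunzip, e.g. feeding the pipe from 'uncompress -c' first.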
