
Re: Read multiple text file from bz2 without extract first

by bms (Monk)
on Mar 27, 2012 at 03:16 UTC

in reply to Read multiple text file from bz2 without extract first

Well, for your first question, see IO::Uncompress::Bunzip2.
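As a minimal sketch of what that module gives you: the snippet below first creates a small sample.txt.bz2 (purely so the demo is self-contained; in real use your compressed file already exists) and then reads it back line by line without ever writing an uncompressed copy to disk. The filename is hypothetical.

```perl
use strict;
use warnings;
use IO::Compress::Bzip2     qw(bzip2 $Bzip2Error);
use IO::Uncompress::Bunzip2 qw($Bunzip2Error);

# Demo setup only: compress a short string into sample.txt.bz2.
bzip2 \"line one\nline two\n" => 'sample.txt.bz2'
    or die "bzip2 failed: $Bzip2Error\n";

# Read the compressed file line by line, no prior extraction needed.
my $z = IO::Uncompress::Bunzip2->new('sample.txt.bz2')
    or die "bunzip2 failed: $Bunzip2Error\n";
while (my $line = $z->getline) {
    print $line;    # process each decompressed line here
}
$z->close;
```

The IO::Uncompress::Bunzip2 object behaves much like an ordinary read filehandle, so getline, read and friends work on the decompressed stream.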

The second answer comes down to the compression method: a plain .bz2 is a single file compressed with bzip2, while a .tar.bz2 is a tarball (a group of files bundled together with tar) that has then been compressed with bzip2.


Re^2: Read multiple text file from bz2 without extract first
by prescott2006 (Acolyte) on Mar 27, 2012 at 05:44 UTC
    For a huge bz2, which is more efficient in terms of speed and CPU usage: decompressing it to disk first, or reading it directly?

      To clarify, I guess bms might have been misunderstood:

      A bz2 file is not called an "archive" precisely because it cannot contain more than one file. bzip2 (like compress, gzip and lzma) can only compress a single file; the archiving of several files into such a compressed file is usually done with tar, which in turn cannot compress. This is different from programs like zip, lha or rar, which do the archiving and compression all in one.

      The idea behind the Unix-style approach is that any of the compressors can also be used for things other than compressing archives (such as in a pipe, to compress network transfers), while when archiving you can combine tar with any of these compressors for different speed/compression tradeoffs.

      Now, do you have a tar.bz2 archive with texts that you want to read or are the texts individually compressed? I suppose it's the former, so you could use Archive::Tar that transparently decompresses compressed tar archives and lets you read individual entries.
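To make the Archive::Tar route concrete, here is a small sketch. It builds a two-entry texts.tar.bz2 first (only so the example runs on its own; normally your archive already exists and the filename here is a stand-in), then reopens it and reads each entry's decompressed content:

```perl
use strict;
use warnings;
use Archive::Tar;    # exports COMPRESS_BZIP

# Demo setup only: create a small bzip2-compressed tar archive.
my $out = Archive::Tar->new;
$out->add_data('a.txt', "alpha\n");
$out->add_data('b.txt', "beta\n");
$out->write('texts.tar.bz2', COMPRESS_BZIP);

# Reading: Archive::Tar detects and undoes the compression transparently.
my $tar = Archive::Tar->new('texts.tar.bz2')
    or die "cannot read archive: " . Archive::Tar->error . "\n";

for my $file ($tar->get_files) {
    printf "%s (%d bytes)\n", $file->name, $file->size;
    my $content = $file->get_content;    # decompressed text of this entry
    # ... process $content ...
}
```

Note that Archive::Tar->new($filename) slurps the whole archive into memory, which matters for the huge-file case discussed below.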

      Regarding efficiency, it depends. Unlike zip-style archivers that have a table of contents at the end, you have to read a tar archive completely to get the contents. If it contains two files of a gigabyte each, you have to decompress the full two gigs just to get the names, and then again to get the contents. Then it might be worth decompressing it to disk first. If you know the names or know that you need everything though, decompressing on the fly will usually be faster. If the archive is not the British National Corpus or worse, it probably doesn't matter :)
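For the huge-archive case, Archive::Tar also offers an iterator that walks the archive one entry at a time, so only the current file's data sits in memory rather than the whole decompressed contents. A minimal sketch (the archive name is hypothetical, and the demo builds a tiny stand-in archive first so it can run):

```perl
use strict;
use warnings;
use Archive::Tar;    # exports COMPRESS_BZIP

# Demo setup only: a small archive standing in for a huge one.
my $out = Archive::Tar->new;
$out->add_data("doc$_.txt", "text $_\n") for 1 .. 3;
$out->write('huge.tar.bz2', COMPRESS_BZIP);

# iter() returns a closure; each call yields the next matching entry
# without loading the rest of the archive into memory.
my $next = Archive::Tar->iter('huge.tar.bz2', 1, { filter => qr/\.txt$/ });
while (my $entry = $next->()) {
    printf "%s: %d bytes\n", $entry->name, $entry->size;
}
```

The tradeoff from the post still applies: a tar stream has no table of contents, so even this iterator must decompress its way through the archive to find each name.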

        Thanks for clarifying, I got a bit swirly there. Was thinking of something else.

        So let's say I have test.bz2 which contains a 1 GB test.txt: does extracting test.txt to disk and then processing it take the same amount of time as reading test.txt directly, without extraction, into another text file?
