Re: gzseek for perl filehandles

by furry_marmot (Pilgrim)
on Dec 24, 2010 at 17:51 UTC


in reply to gzseek for perl filehandles

Others have said something like this, but I thought I'd throw in my two cents and try to hit the nail on the head.

On the face of it, since the text you seek is compressed, random-access seeking is not possible. An offset of 150K into the compressed file is meaningless in the context of the uncompressed text: 150K of compressed data might correspond to 300K of uncompressed text, or to 1.5 MB. You have to uncompress a bunch of it AND THEN do your seeks. Consider what BrowserUK quoted in the first reply to your post:

If file is open for reading, the implementation may still need to uncompress all of the data up to the new offset. As a result, gzseek() may be extremely slow in some circumstances.

In other words, some, or most, or all of the file must be uncompressed before you can do your random seeks -- and that's for each call to gzseek()! Performance will suck heavily.
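
Here's roughly what that looks like with Compress::Zlib (the filename and offset are made up). The gzseek() below looks like a random seek, but zlib still inflates and throws away everything in front of the target:

    use strict;
    use warnings;
    use Fcntl qw(SEEK_SET);
    use Compress::Zlib;

    my $gz = gzopen('big.txt.gz', 'rb')
        or die "gzopen failed: $gzerrno\n";

    # Inflates and discards the first 150K of *uncompressed* data
    # just to get the stream positioned here.
    $gz->gzseek(150_000, SEEK_SET);

    my $buf;
    $gz->gzread($buf, 1024);   # 1K starting at uncompressed offset 150K
    $gz->gzclose();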

In your responses, you've made it clear that you're still looking for something that will do random seeks into a compressed file. So I repeat: IT'S NOT POSSIBLE. Anything that appears to do it is a parlor trick. Whatever module you find or write will either uncompress the file a little at a time to find what you're looking for, or uncompress the whole file and search through that.

If you can't get away from the size of the file, consider rethinking your approach. Can you turn the process around: uncompress the file a block at a time and process the phrases as you read them, rather than seeking to each phrase separately (which is what it sounds like you want to do)? If you really, truly have to search the whole file for each phrase, the fastest solution is probably to uncompress it yourself (keeping the compressed version so you don't have to re-compress it), do whatever you're doing, and then delete the uncompressed copy.
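
For instance, with IO::Uncompress::Gunzip you can make a single streaming pass and test every phrase against each line as it is inflated (the phrase list and filename are placeholders):

    use strict;
    use warnings;
    use IO::Uncompress::Gunzip qw($GunzipError);

    my @phrases = ('first phrase', 'second phrase');   # whatever you're hunting for
    my $re      = join '|', map quotemeta, @phrases;

    my $z = IO::Uncompress::Gunzip->new('big.txt.gz')
        or die "gunzip failed: $GunzipError\n";

    # One pass: each byte is decompressed exactly once,
    # no matter how many phrases you're looking for.
    my $lineno = 0;
    while (defined(my $line = $z->getline())) {
        $lineno++;
        print "line $lineno: $line" if $line =~ /$re/;
    }
    $z->close();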

My two cents.

--marmot


Re^2: gzseek for perl filehandles
by Anonymous Monk on Dec 27, 2010 at 13:56 UTC
    OP here,

    I think I found some equivalent off-the-shelf solutions: fusecompress, fuse-zip, and compFUSEd.

    If anyone has any experience with the above they can share, it would be greatly appreciated.

    However, I still think the original idea could work with an ordered dictionary of offsets, given that the compression reset points can be identified.
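
    A rough sketch of what I mean (filenames and chunk size are made up): write the archive as a series of independent gzip members and keep an ordered list of (uncompressed offset, compressed offset) pairs; a "seek" then jumps to the nearest member start and inflates at most one chunk:

        use strict;
        use warnings;
        use Fcntl qw(SEEK_SET);
        use IO::Compress::Gzip qw(gzip $GzipError);
        use IO::Uncompress::Gunzip qw($GunzipError);

        # Build: one gzip member per 64K chunk, plus an ordered offset index.
        # (In practice the index would be saved alongside the archive.)
        my @index;    # pairs of [ uncompressed offset, compressed offset ]
        open my $src, '<:raw', 'big.txt'    or die $!;
        open my $out, '>:raw', 'big.txt.gz' or die $!;
        my $upos = 0;
        while (read($src, my $chunk, 64 * 1024)) {
            push @index, [ $upos, tell($out) ];
            gzip \$chunk => $out or die "gzip failed: $GzipError\n";
            $upos += length $chunk;
        }
        close $out;

        # "Seek": find the last reset point at or before the target offset.
        my $want = 150_000;
        my ($uoff, $coff) = @{ (grep { $_->[0] <= $want } @index)[-1] };

        open my $in, '<:raw', 'big.txt.gz' or die $!;
        seek $in, $coff, SEEK_SET or die $!;
        my $z = IO::Uncompress::Gunzip->new($in, MultiStream => 1)
            or die "gunzip failed: $GunzipError\n";
        $z->read(my $skip, $want - $uoff) if $want > $uoff;  # inflate at most one chunk
        $z->read(my $buf, 1024);   # 1K starting at uncompressed offset 150K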

      could work with an ordered dictionary

      Perhaps you should take a look at cdb and CDB_File. If size is an issue, think about compressing each record separately before stuffing it into cdb.
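
      For example (a minimal sketch; file and key names are invented):

          use strict;
          use warnings;
          use CDB_File;
          use Compress::Zlib;   # for compress() / uncompress()

          # Build: deflate each record separately before inserting it.
          my $maker = CDB_File->new('phrases.cdb', "phrases.cdb.$$")
              or die "CDB_File->new failed: $!";
          $maker->insert('some key', compress('some long record ...'));
          $maker->finish;

          # Lookup: constant-time by key, then inflate just that one record.
          tie my %db, 'CDB_File', 'phrases.cdb' or die "tie failed: $!";
          print uncompress($db{'some key'}), "\n";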

      Also, think about using SQLite (i.e. DBI and DBD::SQLite).
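
      Again just a sketch (table and column names invented):

          use strict;
          use warnings;
          use DBI;

          my $dbh = DBI->connect('dbi:SQLite:dbname=phrases.db', '', '',
                                 { RaiseError => 1, AutoCommit => 1 });
          $dbh->do('CREATE TABLE IF NOT EXISTS phrase (k TEXT PRIMARY KEY, v BLOB)');

          # Store and fetch one record; SQLite handles the on-disk layout.
          $dbh->do('INSERT OR REPLACE INTO phrase (k, v) VALUES (?, ?)',
                   undef, 'some key', 'some value');
          my ($v) = $dbh->selectrow_array(
              'SELECT v FROM phrase WHERE k = ?', undef, 'some key');
          print "$v\n";
          $dbh->disconnect;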

      Alexander

      --
      Today I will gladly share my knowledge and experience, for there are no sweeter words than "I told you so". ;-)
