http://www.perlmonks.org?node_id=1034733


in reply to Re^2: IO::Uncompress::Gunzip to scalar takes hours (on windows)
in thread IO::Uncompress::Gunzip to scalar takes hours (on windows)

So, that shouldn't be taxing your memory too much.

I couldn't reproduce your problem. I ran your program verbatim on my Windows system (Vista 64-bit, AS Perl 5.10.1, IO::Compress 2.060) on files of 65/210MB and 80/600MB, and both took roughly 5 seconds decompressing to disk and 3 seconds to memory.
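
For anyone who wants to try the same comparison, a minimal sketch (the filenames are placeholders; this is not the OP's actual program):

    use strict;
    use warnings;
    use Time::HiRes qw( time );
    use IO::Uncompress::Gunzip qw( gunzip $GunzipError );

    my $gz = shift || 'big.gz';    # placeholder test file

    # One-shot decompress to a file on disk
    my $start = time;
    gunzip $gz => 'big.out', BinModeOut => 1
        or die "gunzip to file failed: $GunzipError\n";
    printf "to disk:   %.2f secs\n", time() - $start;

    # One-shot decompress to an in-memory scalar
    $start = time;
    gunzip $gz => \my $buffer
        or die "gunzip to scalar failed: $GunzipError\n";
    printf "to memory: %.2f secs\n", time() - $start;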

Does the same problem occur with every file you decompress to memory, or is it confined to one particular file?

One possibility (mentioned by Corion) is indicated by excessive page faults for the process(*). If the process shows a page-fault delta running into double digits per second or more, you have probably hit the malloc problem. But if that were the case, I would have expected to be able to reproduce it here on my standard AS install.

(*You'll need Process Explorer or work out how to use perfmon.exe to find this information.)
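
Alternatively, typeperf (which ships with Windows) can sample the counter directly. A quick sketch, assuming your process shows up under the instance name "perl":

    # Print the perl process's page-fault rate once a second, ten samples
    system 'typeperf', '\Process(perl)\Page Faults/sec', '-sc', '10';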


Re^4: IO::Uncompress::Gunzip to scalar takes hours (on windows)
by cmv (Chaplain) on May 22, 2013 at 15:25 UTC
    The same problem on all files.

    I've updated the original post with profile data comparing a good run and a bad run - maybe that will help.

    It seems to be dawning on me that the problem has more to do with the old AS 5.8.9 I'm using than with the IO::Uncompress::Gunzip module. However, updating will take lots of testing. Not sure which is the worse of the two evils.

    Thanks for the help! ++BrowserUk

      I've updated the original post with profile data comparing a good run and a bad run - maybe that will help.

      Hm. That tells us that the vast majority of the time (98%+) in the slow version is spent in Compress::Raw::Zlib::inflateStream::inflate,

      which doesn't make a whole lot of sense, given that both runs are inflating the same data from the same place.

      A line profile might shed more light, though that seems doubtful: the time is being spent on what is effectively input, while the only difference between the two versions is the output target.
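
      A minimal way to get one, assuming Devel::NYTProf will build under your 5.8.9 (yourscript.pl is a placeholder):

          perl -d:NYTProf yourscript.pl   # writes nytprof.out
          nytprofhtml                     # renders a per-line HTML report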

      Again, I urge you to obtain the page fault stats.
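
      If they do confirm the malloc problem, one workaround worth trying is to pre-extend the target scalar before decompressing, so that inflate appends into an already-grown buffer instead of forcing repeated reallocations. A sketch only (it assumes perl retains the string's allocation across the reset, and that the gzip trailer carries a usable size):

          use IO::Uncompress::Gunzip qw( gunzip $GunzipError );

          my $gz = 'big.gz';    # placeholder input file

          # RFC 1952: the last 4 bytes of a gzip file hold the
          # uncompressed size modulo 2**32, little-endian (ISIZE).
          open my $fh, '<:raw', $gz or die "open: $!";
          seek $fh, -4, 2 or die "seek: $!";    # 2 == SEEK_END
          read $fh, my $trailer, 4 or die "read: $!";
          close $fh;
          my $isize = unpack 'V', $trailer;

          # Force one big allocation up front, then reset the length;
          # the buffer should be reused rather than regrown piecemeal.
          my $out = "\0" x $isize;
          $out = '';

          gunzip $gz => \$out
              or die "gunzip failed: $GunzipError\n";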


        You are correct! I've updated the original post with the page-fault data, and it looks like this is the problem.

        Do you have a suggestion on the easiest/fastest way for me to fix this? (I'd like to avoid upgrading the AS perl if possible, since that would mean lots of retesting.)

        Can I do something programmatically? I've tried Corion's suggestion, but that didn't seem to work. Maybe I'm not doing it quite correctly...