Re^6: IO::Uncompress::Gunzip to scalar takes hours (on windows)

by cmv (Chaplain)
on May 22, 2013 at 18:04 UTC


in reply to Re^5: IO::Uncompress::Gunzip to scalar takes hours (on windows)
in thread IO::Uncompress::Gunzip to scalar takes hours (on windows)

You are correct! I've updated the original post with the pagefault data, and it looks like this is the problem.

Do you have a suggestion on the easiest/fastest way for me to fix this (I'd like to avoid upgrading the AS perl if possible - lots of retesting needed for this)?

Can I do something programmatically? I've tried Corion's suggestion, but that didn't seem to work. Maybe I'm not doing it quite correctly...


Re^7: IO::Uncompress::Gunzip to scalar takes hours (on windows)
by BrowserUk (Patriarch) on May 22, 2013 at 19:25 UTC
    Do you have a suggestion on the easiest/fastest way for me to fix this (I'd like to avoid upgrading the AS perl if possible

    The simplest way would be to get hold of the AS sources for that version of perl and make the one-line patch I posted somewhere on this site; I'd look it up, but unless you are prepared to pay for it, I don't think the AS sources for 5.8.9 are available any longer. You might then be able to swap the rebuilt Perl.exe/perl5.8.9.dll into your existing distribution and have the problem go away. I don't guarantee it, but it ought to work.

    Can I do something programmatically?

    I seem to recall that, before working out the patch, I came up with a scheme that seemed to mitigate the problem for the most part.

    That involved pre-allocating memory to cover the final size of the growth pattern that is causing the page faults -- in smallish chunks, so that the memory is retained by the process pool when freed, rather than in one big chunk, which would be returned to the OS when freed. From memory, this pre-stuffing of the memory pool avoided many of the page faults, at least some of the time.

    Something like:

        ## Prior to the main memory-consuming process:
        my @dummy;
        $dummy[ $_ ] .= chr(0) x 4096 for 0 .. 256 * 1024;
        undef @dummy;   ## Adds 1GB worth of pages to the memory pool.

    Worth a try, but I can't find any examples from that far back and cannot verify it.
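
    For concreteness, here is a minimal sketch of that same pre-fault wrapped around the decompression itself. The input filename 'big.gz' and the 1GB figure are illustrative assumptions, not taken from the thread; size the loop to roughly the expected uncompressed output:

        use strict;
        use warnings;
        use IO::Uncompress::Gunzip qw( gunzip $GunzipError );

        ## Pre-fault ~1GB of heap in 4KB pieces so that, once freed,
        ## the pages stay in the process pool rather than being
        ## returned to the OS.
        {
            my @dummy;
            $dummy[ $_ ] .= chr(0) x 4096 for 0 .. 256 * 1024;
            undef @dummy;
        }

        ## The scalar backing $out can now grow into already-committed
        ## pages instead of faulting in fresh ones as it is extended.
        my $out;
        gunzip( 'big.gz' => \$out )
            or die "gunzip failed: $GunzipError\n";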


    With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.
