PerlMonks
Re: Slurping a large (>65 GB) file into a buffer

by aquarium (Curate)
on Oct 01, 2007 at 15:03 UTC ( [id://641913] )


in reply to Slurping a large (>65 GB) file into a buffer

Pardon my ignorance... but why the need to slurp in clusters of HTML pages? Is that meant to increase efficiency somehow, or is there some other requirement for this multi-page read?
the hardest line to type correctly is: stty erase ^H

Replies are listed 'Best First'.
Re^2: Slurping a large (>65 GB) file into a buffer
by downer (Monk) on Oct 01, 2007 at 16:16 UTC
    Yes, 65 GB. I downloaded this data from a respected source; now I am trying to incrementally read as much as I can into memory, process it, and then get some more. I think setting the input record separator ($/) will be useful.
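    A minimal sketch of that record-separator approach, assuming the input is a single file of concatenated HTML pages; the filename and the "</html>" terminator below are placeholders for illustration, not details from the thread:

        #!/usr/bin/perl
        use strict;
        use warnings;

        # 'pages.html' is a hypothetical input file standing in for the real 65 GB data.
        my $file = 'pages.html';
        open my $fh, '<', $file or die "Cannot open $file: $!";

        {
            # With $/ set, each <$fh> read returns one record rather than one line.
            # "</html>\n" is an assumed terminator for concatenated HTML pages.
            local $/ = "</html>\n";
            while ( my $page = <$fh> ) {
                chomp $page;    # chomp strips $/, i.e. the "</html>\n" terminator
                # Process one page here; memory use is bounded by the largest
                # single record, never by the whole file.
                print length($page), "\n";
            }
        }

        close $fh;

    Alternatively, setting $/ to a reference to an integer (local $/ = \(64 * 1024 * 1024);) makes each read return a fixed-size chunk, which is closer to the "as much as I can into memory" pattern described above.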
      You didn't answer aquarium's question, which was: why do you plan to load more than a page at a time?
      Yes, 65 GB. I downloaded this data from a respected source
      /me .oO( Hugh Hefner )
