PerlMonks
Since the file is pretty large, I'd like the piped input to be read into an array and not a scalar.

A 1GB, 16e6-line file loaded into a scalar requires 1GB + ~48 bytes and loads in a couple of seconds. That same file loaded into an array requires just under 2GB (and uses over 4.5GB in the process of building it!) and takes much longer to load. That's because you still have to load the 1GB of data, but you also have to allocate memory for the 16e6 scalars, the array to hold them, and the intermediate arrays that are discarded during the process.

If you are trying to conserve memory, don't load the file into an array and then process it line by line. Just process each line as you read it, and discard it before reading the next one. Reading and processing line by line this way uses only a few kB.

With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
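The line-by-line approach described above can be sketched as follows. This is a minimal illustration, not code from the original post; the subroutine name and the pattern-counting work are assumptions standing in for whatever per-line processing you actually need:

```perl
use strict;
use warnings;

# Process a filehandle (STDIN, a pipe, a file) one line at a time.
# Only the current line is ever held in memory; it is discarded
# before the next read, so memory use stays at a few kB regardless
# of file size.
sub count_matching_lines {
    my ( $fh, $pattern ) = @_;
    my $count = 0;
    while ( my $line = <$fh> ) {
        ++$count if $line =~ $pattern;    # do per-line work here
        # $line is released here before the next iteration reads more
    }
    return $count;
}

# Typical use on piped input:
# my $n = count_matching_lines( \*STDIN, qr/error/i );
```

Contrast this with `my @lines = <$fh>;`, which forces Perl to allocate a scalar per line plus the array holding them all before any processing can begin.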
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.
In reply to Re: Using piped I/O?
by BrowserUk