PerlMonks
Re^5: Reading HUGE file multiple times
by BrowserUk (Patriarch) on Apr 28, 2013 at 13:45 UTC [id://1031069]
"I think the reason is it's writing data line as a hash name and the data line can have 300.000 characters."

No, it's not. At least, if your description of the file is accurate, it isn't. This bit of the code:

    $Library_Index{<$Library>} = tell(ARGV);

reads the IDs and constructs the hash. And this bit:

    scalar <$Library>;

reads and discards the long data lines.

However, now I think I see the problem with your version of the code. This bit of the line:

    until eof();

iterates until the file has been read, except that you forgot to put the filehandle $Library in the parens. So the program will never end, because it is testing the end-of-file condition of a different file, which will never be true. Change the line to:

    until eof($Library);
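To make the index-then-seek pattern concrete, here is a minimal, self-contained sketch. The file name and sample records are invented for illustration; the original code read via the magic ARGV handle and used tell(ARGV), whereas this sketch uses a lexical filehandle throughout:

```perl
use strict;
use warnings;

# Hypothetical sample file: ID lines alternating with (here, short) data lines,
# standing in for the huge file described in the thread.
my $path = 'library_demo.txt';
open my $out, '>', $path or die "open: $!";
print $out ">seq$_\n", ( 'ACGT' x 5 ), "\n" for 1 .. 3;
close $out;

# Pass 1: record the byte offset of each data line, keyed by its ID line.
my %Library_Index;
open my $Library, '<', $path or die "open: $!";
until ( eof($Library) ) {    # note: eof($Library), not eof()
    chomp( my $id = <$Library> );
    $Library_Index{$id} = tell $Library;    # offset of the following data line
    scalar <$Library>;                      # read and discard the data line
}

# Later passes: seek straight to any record instead of rescanning the file.
seek $Library, $Library_Index{'>seq2'}, 0;
chomp( my $data = <$Library> );
print "$data\n";    # prints ACGTACGTACGTACGTACGT
close $Library;
unlink $path;
```

The point of the pattern is that only the first pass is O(file size); every subsequent lookup is a single seek plus one line read, no matter how long the data lines are.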
And see how long it takes.

With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.
In Section: Seekers of Perl Wisdom