PerlMonks
Re^2: Reading HUGE file multiple times
by Anonymous Monk on Apr 28, 2013 at 12:42 UTC [id://1031059]
Hi there,

Thanks for the tips. My data looks something like this:

    >ID
    Data (a verrry long string of varying length, in a single line)
    >ID again
    Data again

Indexing might be a good idea. Maybe I could index only the ID lines (skipping the data line) and then, when accessing, just add +1 to the index? I need to extract the records twice in the code, in different subroutines, and each time the subroutine specifies what to do with them. I don't know if it is a good idea to store it all in a hash: I only need to extract a fragment of the data in the first read, but the whole data entry in the other. I don't have the IDs in advance; the subroutine specifies which one I need and what to do with it.

I've tried

    $Library_Index{<$Library>} = tell(ARGV), scalar <$Library> until eof();

but it takes a very long time to run. I wonder if there is a better way to do it, since this would be a bottleneck.
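For what it's worth, here is a minimal sketch of the tell/seek indexing idea described above, under the stated assumption that each record is one ">ID" header line followed by a single long data line. The names build_index and fetch_record (and the idea of re-opening the file per fetch) are my own illustration, not the original poster's code; note it uses tell on the lexical filehandle itself, rather than mixing a lexical handle with tell(ARGV) as in the snippet above.

```perl
use strict;
use warnings;

# Build a hash mapping each ID to the byte offset of its ">ID" header.
# Assumes one header line followed by exactly one data line per record.
sub build_index {
    my ($path) = @_;
    open my $fh, '<', $path or die "Can't open $path: $!";
    my %index;
    my $pos = tell $fh;            # offset of the line we are about to read
    while ( my $line = <$fh> ) {
        if ( $line =~ /^>(\S+)/ ) {
            $index{$1} = $pos;     # record starts at the header line
        }
        $pos = tell $fh;
    }
    close $fh;
    return \%index;
}

# Fetch one record on demand: seek to the stored offset, skip the
# header, and return the single data line (without the newline).
sub fetch_record {
    my ( $path, $index, $id ) = @_;
    defined $index->{$id} or die "Unknown ID: $id";
    open my $fh, '<', $path or die "Can't open $path: $!";
    seek $fh, $index->{$id}, 0;
    my $header = <$fh>;
    my $data   = <$fh>;
    close $fh;
    chomp $data;
    return $data;
}
```

The index is built in a single sequential pass, so each later lookup is one seek plus two line reads instead of a rescan of the whole file; for the "fragment only" first pass you could substr the returned data, or read a fixed number of bytes after the header instead.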
In Section: Seekers of Perl Wisdom