<p>I have a 5GB file containing identifier lines, each followed by a very long data line (both are single lines). In a loop I get coordinates that tell me which identifier I need and which part of the corresponding data line I need to extract and modify. The problem is that this loop runs for >1000 iterations, and re-reading the file each time is a dumb idea. I was thinking about loading it into a hash, but I'm not sure about memory limitations. Any idea on how to tackle this? Speed is really an important factor. Maybe do a system call with <code>qx</code> and a Linux <code>grep</code> command? I have to get away from the computer for a couple of hours, so thanks in advance!</p>
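<p>One possible approach, sketched below under some assumptions: instead of holding 5GB of data lines in a hash, make a single pass over the file and store only the <em>byte offset</em> of each data line, keyed by its identifier. Then each of the >1000 lookups is a <code>seek</code>/<code>read</code> straight into the region you want. The filename <code>big.dat</code>, the <code>&gt;</code> prefix on identifier lines, and the helper name <code>fetch_region</code> are all illustrative assumptions; adjust the regex and names to your actual format. Byte arithmetic within the data line assumes a single-byte encoding.</p>
<code>
#!/usr/bin/env perl
use strict;
use warnings;

# One pass over the big file: record the byte offset at which each data
# line starts, keyed by the identifier line that precedes it. Only the
# offsets live in memory, not the (very long) data lines themselves.
# Assumption: identifier lines start with '>' (change the regex to
# match your real format).
my %offset;
open my $fh, '<', 'big.dat' or die "big.dat: $!";
while (my $line = <$fh>) {
    if ($line =~ /^>(\S+)/) {
        $offset{$1} = tell($fh);   # file position just after the id line
    }
}

# Inside the coordinate loop: jump directly to the slice you need.
sub fetch_region {
    my ($id, $start, $len) = @_;   # $start is a 0-based offset into the data line
    die "unknown identifier $id" unless exists $offset{$id};
    seek($fh, $offset{$id} + $start, 0) or die "seek: $!";
    read($fh, my $buf, $len) == $len or die "short read for $id";
    return $buf;
}
</code>
<p>For ~1000 queries this should be far faster than shelling out to <code>grep</code>, which would scan the whole 5GB on every call; the index pass reads the file exactly once.</p>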