OK, I have searched "Super Search" for an answer to my issue, and this is what I have gleaned. I have a process that looks through an extracted pipe-delimited flat file via a `while (<FILEHANDLE>)` loop for a user id. When I find an id, I split the line on the pipe, examine the data, and based on the values, move it into a specific keyed hash to be processed in another part of the program. My issue is that the flat files are enormous (well, for me) at 250+ MB. Is there a way I can tell perl not to rescan the file each time the while loop goes through?
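One common way to avoid rescanning at all is to read the file exactly once and index every record by user id up front, then do all lookups against the hash. A minimal sketch, assuming the file is called `extract.txt` and the user id is the first pipe-delimited field (both assumptions, since the post does not say):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# One-pass sketch: read the flat file once, index every record by its
# user id, and look up against the hash afterwards instead of rescanning.
my %records;
open my $fh, '<', 'extract.txt' or die "Cannot open extract.txt: $!";
while (my $line = <$fh>) {
    chomp $line;
    my @fields = split /\|/, $line;
    my $id = $fields[0];               # assumed: user id is the first field
    push @{ $records{$id} }, \@fields; # allow multiple rows per id
}
close $fh;

# Later, anywhere in the program: constant-time lookup, no file rescan.
# my $rows_for_id = $records{'some_user_id'};
```

With a 250 MB file this trades disk time for memory; if the whole index will not fit, storing only the fields actually needed (or just byte offsets) keeps the hash smaller.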
Is there any way to tell perl, "hey, you have already gone through x lines; don't start over at the beginning of the file when you loop back through"?

In reply to Search Efficiency by treebeard
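For the literal question, a plain `while (<FILEHANDLE>)` already reads sequentially and does not restart on its own; restarting only happens if the code reopens the file or seeks back. If the program genuinely needs to stop and later resume mid-file, the built-ins `tell` and `seek` can save and restore the byte position. A sketch under the same assumed filename, with a hypothetical `found_id()` as the stopping condition:

```perl
use strict;
use warnings;

# Pass 1: read until some stopping point, then remember where we were.
open my $fh, '<', 'extract.txt' or die "Cannot open extract.txt: $!";
my $offset = 0;
while (my $line = <$fh>) {
    last if found_id($line);   # hypothetical: stop once the id is handled
}
$offset = tell $fh;            # byte position just after the last read
close $fh;

# Pass 2: jump straight back instead of rereading the earlier lines.
open $fh, '<', 'extract.txt' or die "Cannot open extract.txt: $!";
seek $fh, $offset, 0;          # whence 0 = absolute offset from start
while (my $line = <$fh>) {
    # ... continues from the line after the saved position ...
}
close $fh;
```

The offset is in bytes, not lines, so it must come from `tell` on the same file; counting lines and skipping them with `$.` would still reread everything before the skip point.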