Slurping BIG files into Hashes
by Elgon (Curate)
on Jun 18, 2003 at 17:44 UTC
Elgon has asked for the wisdom of the Perl Monks concerning the following question:
I have a bit of a query: I am trying to do some relatively straightforward file transformations as part of some testing we're doing on a project at work. The first stage of this involves reading in a big lookup file, circa 160,000 records (~3.4MB), and converting it into a hash.
The format of the file is basically records twenty-one characters long, where the first thirteen characters serve as the key for the remaining eight. I need to go through the file and read it into a hash for use as a lookup table. I am doing this thusly...
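Something along these lines (a minimal sketch; the filename and the assumption that each record sits on its own newline-terminated line are placeholders):

    #!/usr/bin/perl
    use strict;
    use warnings;

    my %lookup;
    open my $fh, '<', 'lookups.dat' or die "Can't open lookups.dat: $!";
    while (my $line = <$fh>) {
        chomp $line;
        # First thirteen characters are the key, the next eight the value
        $lookup{ substr($line, 0, 13) } = substr($line, 13, 8);
    }
    close $fh;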
This may seem hugely inefficient; however, the overhead of setting this up is offset by the fact that we'll be processing some 700 files, each about a megabyte in length. Needless to say, this is being done on a BIG box, and memory is not going to be a problem.
But it seems to be taking about half an hour to do the initial processing. Is there a faster way to do it?
Thanks in advance...
Please, if this node offends you, re-read it. Think for a bit. I am almost certainly not trying to offend you. Remember - Please never take anything I do or say seriously.