PerlMonks
Array of Hashes vs. Array of Arrays vs. Array of Lines vs. ...
by Anonymous Monk on Aug 30, 2007 at 14:24 UTC ( id://636092 )
Anonymous Monk has asked for the wisdom of the Perl Monks concerning the following question:
I have to read in some CSV data from a flat data file with many columns, and many more rows than columns. The file is approximately 1 GB in size, and each column has a different meaning. I am looking for the best way to keep the file in memory. At this time I have come up with the following:

Array of hashes: This mimics an array of structs in C, e.g.

    $z = $Data[$RecNo]{'Intensity'};

I have heard that keeping many small hashes around is wasteful, however, and given the regularity of the data this seems sub-optimal.

Hash of arrays: This is the reverse of the array of hashes, but uses only as many hash entries as there are columns:

    $z = $Data{'Intensity'}[$RecNo];

Array of small arrays:

    use constant INTENSITY => 2;
    $z = $Data[$RecNo][INTENSITY];

Small array of large arrays:

    $z = $Data[INTENSITY][$RecNo];

Alternatively, I can keep each line in memory as a single string and split it at the time of access (this means that scalars will hold larger chunks).

Is there any sense in which one of these methods is 'best' somehow? Any help would be appreciated. (Also, how much memory do small scalars take? A small array? Any help from someone with Devel::Size would be appreciated!)
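Since the question asks about Devel::Size: here is a minimal sketch that builds the three main layouts from synthetic data and compares their total memory footprints with Devel::Size's total_size. The column names and record count are made up for illustration, and Devel::Size must be installed from CPAN; the exact byte counts will vary by Perl version and platform.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Devel::Size qw(total_size);    # CPAN module; measures a structure and everything it references

my @cols = qw(Time Mass Intensity Charge Flag);    # hypothetical column names
my $rows = 1000;                                    # small stand-in for the real row count

# Array of hashes: one small hash per record.
my @aoh = map {
    my $rec = $_;
    { map { $_ => $rec } @cols }
} 1 .. $rows;

# Hash of arrays: one long array per column, keyed by column name.
my %hoa;
for my $rec (1 .. $rows) {
    push @{ $hoa{$_} }, $rec for @cols;
}

# Array of small arrays: one small array per record, indexed by column number.
my @aoa = map {
    my $rec = $_;
    [ ($rec) x @cols ]
} 1 .. $rows;

printf "array of hashes: %d bytes\n", total_size(\@aoh);
printf "hash of arrays:  %d bytes\n", total_size(\%hoa);
printf "array of arrays: %d bytes\n", total_size(\@aoa);
```

On typical builds the hash of arrays comes out smallest of the three, because the per-record overhead of a thousand tiny hashes (or arrays) dwarfs the cost of five long column arrays; running the script on your own Perl is the only way to get real numbers.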
Back to Seekers of Perl Wisdom