PerlMonks
Re: Hash of Arrays - Pushing Array references and memory impacts question
by Marshall (Canon) on Oct 31, 2011 at 16:40 UTC ( id://934918 )
Another possibility, in a different direction, is to consider using SQLite. All you have to do is install DBD::SQLite, then just "use DBI;"; DBI will figure out what to do from the connect statement. SQLite avoids all the account setup and admin headaches of a traditional SQL server: it stores the data as just a single regular file, and there are no "accounts". I've found the performance to be very good, and this solution scales easily. One nice feature is that the amount of cache it uses can be varied dynamically. I run it way up to speed up indexing operations and then run it back down for normal operation. I don't know enough about your application to say for sure whether this is a good idea for you, but it has become my "go to" solution for a disk-resident DB. It supports a big subset of SQL, but you can use it in a simple way without having to become an SQL guru. Maybe you just have a single un-normalized table and index one column as the "key".

Update: this idea would be appropriate if it helped somehow in the processing of this huge hash, for example if you had to search for stuff that would be part of the "values" for the keys. If the job is just a matter of retrieving the set of data associated with a single key, I would think that BrowserUk's idea of making the "data" a single string instead of a reference to an array of strings would make a lot of sense. This also reduces memory requirements somewhat, since a single string takes less memory than an array of strings.
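As a rough sketch of the SQLite idea above (assuming DBD::SQLite is installed; the file name hoa.db, the table and index names, and the cache sizes are all made up for illustration):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;    # DBI picks the DBD::SQLite driver from the DSN below

# The "database" is just a regular file: no server, no accounts.
my $dbh = DBI->connect( "dbi:SQLite:dbname=hoa.db", "", "",
    { RaiseError => 1, AutoCommit => 1 } );

# One un-normalized table, with one column used as the "key".
$dbh->do("CREATE TABLE IF NOT EXISTS hoa (key TEXT, value TEXT)");

# Crank the page cache way up for the expensive indexing run...
$dbh->do("PRAGMA cache_size = 100000");
$dbh->do("CREATE INDEX IF NOT EXISTS idx_key ON hoa (key)");

# ...then turn it back down for normal operation.
$dbh->do("PRAGMA cache_size = 2000");

# Retrieving the set of values for a single key is then one query.
my $values = $dbh->selectcol_arrayref(
    "SELECT value FROM hoa WHERE key = ?", undef, "some_key" );
```

Everything beyond the connect statement is plain SQL, so you can stay at this simple level without ever touching the fancier parts of the language.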
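The single-string alternative mentioned in the update can be sketched in a few lines of plain Perl (the separator choice here, a NUL byte, is my own assumption; any byte that cannot appear in the data works):

```perl
use strict;
use warnings;

# Hash of arrays: every key pays for a full array of scalars.
my %hoa;
push @{ $hoa{fruit} }, 'apple', 'pear';

# Single-string variant: one scalar per key, values joined with a
# separator that cannot occur in the data (NUL, "\x00", here).
my %flat;
$flat{fruit} = join "\x00", 'apple', 'pear';

# Split only when the set of values is actually needed.
my @values = split /\x00/, $flat{fruit};
```

The trade-off is a split on every retrieval in exchange for one scalar's worth of overhead per key instead of an array's.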
In Section: Seekers of Perl Wisdom