Re^2: Disk based hash (as opposed to RAM based)

by thomas895 (Hermit)
on Oct 07, 2012 at 20:44 UTC


in reply to Re: Disk based hash (as opposed to RAM based)
in thread Disk based hash (as opposed to RAM based)

"One way around the limitation would be to store your array values as joined strings:"
That works for simple strings, but once your data gets more complex, you should probably consider a serializer. I find that JSON::XS works great for this purpose, as does Data::Dumper with an eval, though the latter is not always the best choice: JSON is more portable and is more easily read by non-Perl applications.
For example:

# Using JSON::XS
use strict;
use warnings;
use Fcntl qw( O_RDWR O_CREAT );    # O_* flags used by tie below
use DB_File;
use JSON::XS qw( encode_json decode_json );

my $sourceString = "...";    # placeholder: your input text goes here

my %hash;
unlink "tempfile";    # Remove previous file, if any
tie %hash, "DB_File", "tempfile", O_RDWR | O_CREAT, 0666, $DB_HASH
    or die "Cannot open file 'tempfile': $!\n";

while ( $sourceString =~ /example(key)regex(value)example\b/ig ) {
    my $key   = $1;
    my $value = $2;

    # DB_File values can only be flat strings, so keep each array
    # JSON-encoded and rewrite it whenever a new value arrives:
    my $array = exists $hash{$key} ? decode_json( $hash{$key} ) : [];
    push @$array, $value;
    $hash{$key} = encode_json( $array );
}

# Elsewhere, decode_json( $hash{$key} ) gives back the array of values.
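
If you'd rather avoid the JSON dependency, here is a minimal sketch of the Data::Dumper-with-eval round trip mentioned above (the string eval is also why it is the less safe option); the sample data is illustrative only:

use strict;
use warnings;
use Data::Dumper;

$Data::Dumper::Terse  = 1;    # emit "[...]" instead of "$VAR1 = [...]"
$Data::Dumper::Indent = 0;    # one line, so it fits in a DB_File value

my $frozen = Dumper( [ "one", "two" ] );    # serialize an array ref to a string
my $thawed = eval $frozen;                  # deserialize by eval'ing the string back
die "Could not thaw stored value: $@" if $@;
print $thawed->[1], "\n";                   # prints "two"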

HTH

~Thomas~
confess( "I offer no guarantees on my code." );


Re^3: Disk based hash (as opposed to RAM based)
by BrowserUk (Pope) on Oct 07, 2012 at 21:42 UTC

    I realise that you are trying to be helpful, but I do not think you have thought this through.

    • Firstly, the OP clearly states Hash of Arrays.

      Hence, catering for anything more is overkill.

    • More importantly, the OP's code clearly shows that he needs to build up the arrays piecemeal -- i.e. value by value.

      If he were to use a serialiser module for this, he would need to deserialise the current state of the appropriate array, add the latest new element, and then re-serialise -- for each line in the file. That would be horribly slow no matter which of the serialiser alternatives he used.

      The only other alternative would be to wait until each array was complete in memory before serialising it and adding it to DB_File. But that would mean waiting until the entire file had been read, so the entire structure would have to be held in memory before serialisation could be performed -- and if he had the memory for that, he wouldn't be looking to use a disk-based hash. (A sketch follows this list.)
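
      A minimal sketch of that in-memory alternative, reusing the tie arguments from the code above; reading from STDIN and the "key<TAB>value" line layout are assumptions for illustration:

      use strict;
      use warnings;
      use Fcntl qw( O_RDWR O_CREAT );
      use DB_File;
      use JSON::XS qw( encode_json );

      tie my %hash, "DB_File", "tempfile", O_RDWR | O_CREAT, 0666, $DB_HASH
          or die "Cannot open file 'tempfile': $!\n";

      my %mem;    # ordinary in-memory hash of arrays
      while ( my $line = <STDIN> ) {
          chomp $line;
          my ( $key, $value ) = split /\t/, $line;
          push @{ $mem{$key} }, $value;    # cheap: no per-line serialisation
      }

      # Serialise each completed array exactly once:
      $hash{$_} = encode_json( $mem{$_} ) for keys %mem;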

    For a one-off process, he might consider pre-sorting the input file by the key field, so that the contents of each (sub)array could be built up in memory before being serialised once (sketched below); but for that to be viable would require a whole set of circumstances that are not in evidence from the OP.
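
    A minimal sketch of that pre-sort approach, assuming a tab-separated input sorted on the key field beforehand (e.g. with the system sort); the file names and line layout are illustrative only:

    use strict;
    use warnings;
    use Fcntl qw( O_RDWR O_CREAT );
    use DB_File;
    use JSON::XS qw( encode_json );

    tie my %hash, "DB_File", "tempfile", O_RDWR | O_CREAT, 0666, $DB_HASH
        or die "Cannot open file 'tempfile': $!\n";

    # Beforehand (outside Perl): sort -k1,1 input.txt > sorted.txt
    open my $in, '<', 'sorted.txt' or die "Cannot open 'sorted.txt': $!\n";

    my ( $current, @values );
    while ( my $line = <$in> ) {
        chomp $line;
        my ( $key, $value ) = split /\t/, $line;
        if ( defined $current and $key ne $current ) {
            $hash{$current} = encode_json( \@values );    # run finished: serialise once
            @values = ();
        }
        $current = $key;
        push @values, $value;
    }
    $hash{$current} = encode_json( \@values ) if defined $current;    # flush the last run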


    With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.

    RIP Neil Armstrong
