Re^3: Hash of Hashes from file

by scorpio17 (Monsignor)
on Apr 03, 2012 at 13:36 UTC ( #963247=note )


in reply to Re^2: Hash of Hashes from file
in thread Hash of Hashes from file

Well, the problem is that a hash can only have one value per key. If you need to associate multiple values with a key, the solution is to store the values in an array and save the array reference in the hash. If you reach the point where you have more data than fits into memory at one time, you should look at using a database, such as MySQL: instead of pushing the data from each line of the file into your hash, you would insert it into the database, then query the database once all the data is loaded.
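A minimal sketch of the hash-of-array-refs idea described above. The input lines follow the format from the thread's sample data; the variable names are illustrative:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sample input lines in the thread's format (hard-coded for illustration).
my @lines = (
    'user="john" website="www.yahoo.com" type="Entertainment"',
    'user="john" website="www.facebook.com" type="Social Networking"',
    'user="mike" website="www.google.com" type="Search Engines"',
);

my %sites_for;    # $user => [ website, website, ... ]
for my $line (@lines) {
    # Capture the quoted fields; the third field (type) is ignored here.
    my ($user, $site) = $line =~ /"([^"]+)"/g;
    # Autovivification creates the array ref the first time a user is seen.
    push @{ $sites_for{$user} }, $site;
}

for my $user (sort keys %sites_for) {
    print "$user: @{ $sites_for{$user} }\n";
}
```

Because each value is an array reference, one key can carry any number of websites; if the data later outgrows memory, the same loop body would become a database insert instead of a push.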


Re^4: Hash of Hashes from file
by cipher (Acolyte) on Apr 03, 2012 at 14:31 UTC
    Yes, hashes have unique keys. Is it possible to generate a hash like this?
    my %hoh = (
        $user => {
            'Website' => [ 'website1', 'website2', 'website3' ],
            'type'    => [ 'type1', 'type2', 'type3' ],
        },
    );
    As I said, I'm not familiar with hashes; just checking.
      Yes, that should work. Then to add a new website/type pair, you could do this:
      push( @{ $hoh{$user}{'Website'} }, 'website4' );
      push( @{ $hoh{$user}{'type'} }, 'type4' );

      and to get all websites for a given user:

      my @websites = @{ $hoh{$user}{'Website'} };

      The syntax looks strange because you're storing an array ref, and have to dereference it.
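To make the dereferencing concrete, here is a small self-contained sketch that walks this parallel-array layout and prints every stored website/type pair (the sample data is hard-coded for illustration):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Illustrative data in the layout discussed above:
# each user holds parallel arrays of websites and types.
my %hoh = (
    'john' => {
        'Website' => [ 'www.yahoo.com', 'www.facebook.com' ],
        'type'    => [ 'Entertainment', 'Social Networking' ],
    },
);

for my $user (sort keys %hoh) {
    # Dereference the stored array refs into plain arrays.
    my @sites = @{ $hoh{$user}{'Website'} };
    my @types = @{ $hoh{$user}{'type'} };
    print "$user\n";
    for my $i (0 .. $#sites) {
        print "\tWebsite: $sites[$i], type: $types[$i]\n";
    }
}
```

Note that this layout relies on the two arrays staying the same length and in the same order; pushing to one without the other would misalign the pairs.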

        Thanks a lot,

        Can you also let me know how to print all the websites and types for each user?

      Yes, it should be possible, but the syntax might get sticky. As scorpio17 says in this thread, what you want to do with the data is one of the deciding factors in how you should store it. If the data is too large to load into memory, you may need to consider some of the suggestions offered by others here.

      If the requirement is just to produce output like you provided, a Hash of Hashes may be a good choice.

      #!/usr/bin/perl
      use strict;
      use warnings;

      my %data;
      while (<DATA>) {
          my ($user, $site, $cat) = /"([^"]+)"/g;
          $data{$user}{$site} = $cat;
      }

      for my $user (keys %data) {
          my $href = $data{$user};
          print $user, "\n";
          print "\tWebsite: $_, Category: $href->{$_}\n" for keys %$href;
      }

      __DATA__
      user="john" website="www.yahoo.com" type="Entertainment"
      user="david" website="www.facebook.com" type="Social Networking"
      user="john" website="www.facebook.com" type="Social Networking"
      user="mike" website="www.google.com" type="Search Engines"
      Output was:
      john
          Website: www.yahoo.com, Category: Entertainment
          Website: www.facebook.com, Category: Social Networking
      mike
          Website: www.google.com, Category: Search Engines
      david
          Website: www.facebook.com, Category: Social Networking
      Update: With a file this large, would it be likely for one user to visit the same website more than once?
        Yes, users visit the same websites multiple times. For this reason I used the literal string "Website" as the key, with the actual website as the value.
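Since the same user/website pair can repeat, one common alternative (a sketch, not taken from the thread) is to count occurrences with a nested hash, so duplicate lines increment a counter instead of overwriting an entry:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sample input with a deliberate duplicate line (illustrative data).
my @lines = (
    'user="john" website="www.facebook.com" type="Social Networking"',
    'user="john" website="www.facebook.com" type="Social Networking"',
    'user="john" website="www.yahoo.com" type="Entertainment"',
);

my %visits;    # $user => { $site => visit count }
for my $line (@lines) {
    my ($user, $site) = $line =~ /"([^"]+)"/g;
    $visits{$user}{$site}++;    # a repeated pair just bumps the counter
}

for my $user (sort keys %visits) {
    for my $site (sort keys %{ $visits{$user} }) {
        print "$user visited $site $visits{$user}{$site} time(s)\n";
    }
}
```

This keeps memory proportional to the number of distinct user/website pairs rather than the number of lines in the log.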
