PerlMonks

How to bucket a Hash

by techman2006 (Beadle)
on Nov 27, 2013 at 06:11 UTC ( #1064556=perlquestion )
techman2006 has asked for the wisdom of the Perl Monks concerning the following question:

I have a requirement where I need to bucket a hash and then pass each smaller hash to a thread to perform some work.

Below is sample code I have written that creates an array of hashes, which can later be passed on to the threads.

#!/usr/bin/perl

use strict;
use warnings;
use integer;
use Data::Dumper;

my %hash = ( 1 => 'a', 2 => 'b', 3 => 'c', 4 => 'd' );
my $size = keys %hash;
print "Total keys are $size\n";

my $numSplit  = 2;
my $diffCount = $size / $numSplit;
my @keys      = keys %hash;
my @arrHash;
my %hash2;

for my $i ( 0 .. $diffCount - 1 ) {
    my $start = $i * $diffCount;
    my $end   = $i * $diffCount + $diffCount - 1;
    my %hash1;
    for my $j ( $start .. $end ) {
        my $key   = $keys[$j];
        my $value = $hash{ $keys[$j] };
        $hash1{$key} = $value;
    }
    print Dumper( \%hash1 );
    push( @arrHash, \%hash1 );
}
print Dumper( \@arrHash );

But I am looking for a more efficient way to do this, as the total number of keys can be anywhere from 2500 to 10000.

Re: How to bucket a Hash
by hdb (Prior) on Nov 27, 2013 at 06:30 UTC

    Splice'n Slice!

    use strict;
    use warnings;
    use integer;
    use Data::Dumper;

    my %hash = ( 1 => 'a', 2 => 'b', 3 => 'c', 4 => 'd' );
    my $size = keys %hash;
    print "Total keys are $size.\n";

    my $numSplit = 2;
    my $partSize = $size / $numSplit;
    my @keys     = keys %hash;
    my @arrHash;
    while ( my @keys2 = splice @keys, 0, $partSize ) {
        my %hash1;
        @hash1{@keys2} = @hash{@keys2};
        push @arrHash, \%hash1;
    }
    print Dumper( \@arrHash );
Re: How to bucket a Hash
by Athanasius (Monsignor) on Nov 27, 2013 at 06:56 UTC

    Hello techman2006,

    Your question implies that the code shown works correctly. But what happens if you have an odd number of elements in the hash? Running your code with %hash = ( 1 => 'a', 2 => 'b', 3 => 'c', 4 => 'd', 5 => 'e' ) I get:

    16:51 >perl 786_SoPW.pl
    Total keys are 5
    $VAR1 = { '1' => 'a', '5' => 'e' };
    $VAR1 = { '4' => 'd', '3' => 'c' };
    $VAR1 = [ { '1' => 'a', '5' => 'e' }, { '4' => 'd', '3' => 'c' } ];
    16:51 >

    which shows that the fifth element is left out altogether.

    Anyway, here is my solution, a variation on hdb’s approach which uses the natatime function from List::MoreUtils:

    #! perl
    use strict;
    use warnings;
    use Data::Dump      qw( pp );
    use List::MoreUtils qw( natatime );

    my $i = 1;
    my %hash = map { $i++ => $_ } 'a' .. 'e';
    print 'Initial hash: ', pp(\%hash), "\n";

    my $it = natatime 2, sort { $a <=> $b } keys %hash;
    my @array_of_hashes;

    while (my @keys = $it->()) {
        my %new_hash = map { ( $_, $hash{$_} ) } @keys;
        push @array_of_hashes, \%new_hash;
    }

    print 'Array of hashes: ', pp(\@array_of_hashes), "\n";

    Update: Changed 'a' .. 'j' to 'a' .. 'e', and the first argument to natatime from 4 to 2, to match the output shown.

    Output:

    16:54 >perl 786_SoPW.pl
    Initial hash: { 1 => "a", 2 => "b", 3 => "c", 4 => "d", 5 => "e" }
    Array of hashes: [{ 1 => "a", 2 => "b" }, { 3 => "c", 4 => "d" }, { 5 => "e" }]
    16:55 >

    Hope that helps,

    Athanasius <°(((><contra mundum

      That was a test program and it didn't take care of all the cases :). Thanks for fixing that issue.

      I am looking at different solutions for assigning jobs to multiple threads, where I want a fixed-size queue assigned to each thread. Since the job items are kept in a hash, I was looking for a fast way to do this slicing, as the total number of keys can be high.

        Why do you want a fixed work-unit size for each thread? Given the vagaries of multitasking, some thread is sure to finish its fixed-length task before another doing a task of the same size. Why not just pour all your data into a Thread::Queue and have a pool of workers service that queue? If you have a very large amount of data, it can also be handy to have a size-limited queue; BrowserUK gives a great example here: Re^5: dynamic number of threads based on CPU utilization.
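The self-balancing pool described above can be sketched roughly like this. The pool size and work items are illustrative; a real worker would do the actual archiving where the comment sits:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use threads;
use threads::shared;
use Thread::Queue;

# Each ithread gets its own copy of %hash at spawn time, which is
# fine for read-only access; only the queue and the counter are shared.
my %hash = map { $_ => "job$_" } 1 .. 10;

my $queue = Thread::Queue->new( keys %hash );

my $num_workers = 3;                            # illustrative pool size
$queue->enqueue(undef) for 1 .. $num_workers;   # one stop marker per worker

my $done : shared = 0;
my @workers = map {
    threads->create( sub {
        # Workers pull keys until they see their stop marker.
        while ( defined( my $key = $queue->dequeue() ) ) {
            # ... process $hash{$key} here ...
            { lock($done); $done++; }
        }
    } );
} 1 .. $num_workers;

$_->join() for @workers;
print "processed $done items\n";
```

Because every worker pulls its next key only when it is free, a thread that drew small jobs simply drains more of the queue; no up-front bucketing is needed.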


        Cheers,
        R.

        Pereant, qui ante nos nostra dixerunt!
Re: How to bucket a Hash
by BrowserUk (Pope) on Nov 27, 2013 at 10:30 UTC

    What does the hash actually contain?


    With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.

      The hash contains a list of directories which need to be archived.

      Also there is a constraint on how many threads we can use per process.

      Below is a small use case for the problem I am trying to solve using threads.

      1. Scan a directory to find how many directories we have to archive.
      2. Make an entry in the DB to keep track of the files we intend to work on.
      3. Archive each directory. Once the operation succeeds, update the DB for that set of files.
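Step 1 above can be sketched like this; the demo builds a throwaway tree (hypothetical names) just so the scan has something to find:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Temp qw( tempdir );

# Build a throwaway tree with two subdirectories to scan
# (proj_a/proj_b are made-up names for the demonstration).
my $root = tempdir( CLEANUP => 1 );
mkdir "$root/$_" or die "mkdir $_: $!" for qw( proj_a proj_b );

# Step 1: collect the subdirectories that are candidates for archiving,
# skipping the '.' and '..' entries readdir always returns.
opendir my $dh, $root or die "opendir $root: $!";
my @dirs = grep { -d "$root/$_" && !/^\.\.?$/ } readdir $dh;
closedir $dh;

print scalar(@dirs), " directories to archive\n";
```

The resulting @dirs list is what would be recorded in the DB (step 2) and then fed to the worker threads (step 3).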

      Now I am trying to multi-thread this, where each thread will work on a fixed-size set.

      One bottleneck I see is the DB handle, which I think can't be shared across threads; I believe that is a limitation of DBI. Any thoughts on how to overcome this bottleneck would be great.
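On the DBI point: a handle indeed cannot be shared across ithreads, but the usual way around it is for each worker to open its own connection inside the thread. A minimal sketch, assuming DBD::SQLite is available; the file and table names are hypothetical:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use threads;
use Thread::Queue;
use DBI;

my $dbfile = 'archive_demo.db';    # hypothetical demo database file
unlink $dbfile;

{
    # Set up the schema once from the parent, then let the handle go
    # out of scope before any thread is spawned.
    my $setup = DBI->connect( "dbi:SQLite:dbname=$dbfile", '', '',
                              { RaiseError => 1 } );
    $setup->do('CREATE TABLE archived (dir TEXT)');
    $setup->disconnect;
}

my $queue = Thread::Queue->new( map { "dir$_" } 1 .. 6 );
my $num_workers = 2;
$queue->enqueue(undef) for 1 .. $num_workers;   # one stop marker per worker

my @workers = map {
    threads->create( sub {
        # Each worker opens its *own* handle inside the thread;
        # a $dbh created in the parent must never be used here.
        my $dbh = DBI->connect( "dbi:SQLite:dbname=$dbfile", '', '',
                                { RaiseError => 1 } );
        while ( defined( my $dir = $queue->dequeue() ) ) {
            # ... archive $dir here, then record the success ...
            $dbh->do( 'INSERT INTO archived (dir) VALUES (?)', undef, $dir );
        }
        $dbh->disconnect;
    } );
} 1 .. $num_workers;
$_->join() for @workers;

my $check = DBI->connect( "dbi:SQLite:dbname=$dbfile", '', '',
                          { RaiseError => 1 } );
my ($count) = $check->selectrow_array('SELECT COUNT(*) FROM archived');
print "archived $count directories\n";
$check->disconnect;
unlink $dbfile;
```

Opening a connection per worker costs a little at startup but removes the shared-handle problem entirely; the DB itself then serializes concurrent writes.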

        The hash contains a list of directories which need to be archived.

        If it is a list, why is it in a hash rather than an array?


        Now I am trying to multi-thread this, where each thread will work on a fixed-size set.

        As Random_walk pointed out, splitting work in equal-sized chunks for threading is not a good strategy.

        Some of your directories will contain fewer files than others, and some will contain only small files while others hold large ones; so it is easy to see that some of your threads will finish more quickly than others, which means the workload is not evenly distributed.

        A better approach would be to make the hash shared, queue its keys (the directory names) to a Thread::Queue, and let the threads pick their work directories from there. That way, the threads become self-balancing.

        That said, I have very profound doubts that threading your application will have any great benefit to your throughput if the directories you are backing up exist on a single physical volume. The problem is that if you have multiple threads (or processes) reading files concurrently from the same physical drive, you will likely create severe head-thrash and so, slow the overall throughput rather than increase it.

        Even if your files are distributed across multiple spindles -- with SAS or raid or similar -- it is still dubious whether you will achieve huge benefits unless you could isolate the location of your files so that you could ensure that only one file from each physical unit was being read at any given time. Mostly, this is not possible as these multi-spindle setups tend to split files across multiple physical volumes transparently to the file system.


