PerlMonks  

Re: Parallelization of multiple nested loops

by marioroy (Priest)
on Feb 09, 2018 at 13:46 UTC (#1208826)


in reply to Parallelization of multiple nested loops

Hi biosub,

Not liking my initial attempt, I tried again with a second one. The demonstrations that follow require machines with at least 32 GiB of RAM.

6 workers: 3.2x faster than Algorithm::Combinatorics on machines with 6 real-cores

use strict; use warnings;

# Run on UNIX machines with 32+ GiB of RAM.
# Otherwise, remove the -use_dev_shm argument.
# Beware, consumes 14 GiB in temp dir.

use MCE::Signal qw[ $tmp_dir -use_dev_shm ];
use Time::HiRes qw[ time ];
use MCE::Loop;
use MCE::Shared;

die "Not UNIX OS\n" if $^O eq 'MSWin32';

# usage: script.pl > out

my $start    = time;
my $c_shared = MCE::Shared->scalar(0);

MCE::Loop::init { max_workers => 6, chunk_size => 1 };

# loop through desired combinations
mce_loop_s {
   my ($p0, $c) = ($_, 0);
   my ($p1,$p2,$p3,$p4,$p5,$p6,$p7,$p8,$p9,$p10);

   open my $fh, ">", "$tmp_dir/".MCE->chunk_id();

   for ($p1=0; $p1<=1; $p1+=0.2) {
    for ($p2=0; $p2<=1; $p2+=0.2) {
     for ($p3=0; $p3<=1; $p3+=0.2) {
      for ($p4=0; $p4<=1; $p4+=0.2) {
       for ($p5=0; $p5<=1; $p5+=0.2) {
        for ($p6=0; $p6<=1; $p6+=0.2) {
         for ($p7=0; $p7<=1; $p7+=0.2) {
          for ($p8=0; $p8<=1; $p8+=0.2) {
           for ($p9=0; $p9<=1; $p9+=0.2) {
            for ($p10=0; $p10<=1; $p10+=0.2) {
             #-------------
             print $fh "$p0\t$p1\t$p2\t$p3\t$p4\t$p5\t$p6\t$p7\t$p8\t$p9\t$p10\t1\t1\n";
             ++$c;
             #-------------
   }}}}}}}}}}

   close $fh;
   $c_shared->incrby($c);

} 0, 1, 0.2;   # p0: seq_beg, seq_end, seq_step

MCE::Loop::finish();

system("cat $tmp_dir/[1-6]; rm -fr $tmp_dir");

printf STDERR "Took: %0.3f seconds [%ld]\n", time() - $start, $c_shared->get();

36 workers: use-case for a 64-way box (32 real-cores + 32 hyper-threads)

This involves nested parallel loops, which are possible using MCE. The shared counter increments correctly no matter how many levels deep; locking is handled automatically via the OO interface.

use strict; use warnings;

# Run on UNIX machines with 32+ GiB of RAM.
# Otherwise, remove the -use_dev_shm argument.
# Beware, consumes 14 GiB in temp dir.

use MCE::Signal qw[ $tmp_dir -use_dev_shm ];
use Time::HiRes qw[ time ];
use MCE::Loop;
use MCE::Shared;

die "Not UNIX OS\n" if $^O eq 'MSWin32';

# usage: script.pl > out

my $start    = time;
my $c_shared = MCE::Shared->scalar(0);

MCE::Loop::init { max_workers => 6, chunk_size => 1 };

# loop through desired combinations
mce_loop_s {
   my $p0 = $_;

   MCE::Loop::init { max_workers => 6, chunk_size => 1 };
   $tmp_dir .= "/".MCE->chunk_id();
   mkdir $tmp_dir;

   mce_loop_s {
      my ($p1, $c) = ($_, 0);
      my ($p2,$p3,$p4,$p5,$p6,$p7,$p8,$p9,$p10);

      open my $fh, ">", "$tmp_dir/".MCE->chunk_id();

      for ($p2=0; $p2<=1; $p2+=0.2) {
       for ($p3=0; $p3<=1; $p3+=0.2) {
        for ($p4=0; $p4<=1; $p4+=0.2) {
         for ($p5=0; $p5<=1; $p5+=0.2) {
          for ($p6=0; $p6<=1; $p6+=0.2) {
           for ($p7=0; $p7<=1; $p7+=0.2) {
            for ($p8=0; $p8<=1; $p8+=0.2) {
             for ($p9=0; $p9<=1; $p9+=0.2) {
              for ($p10=0; $p10<=1; $p10+=0.2) {
               #-------------
               print $fh "$p0\t$p1\t$p2\t$p3\t$p4\t$p5\t$p6\t$p7\t$p8\t$p9\t$p10\t1\t1\n";
               ++$c;
               #-------------
      }}}}}}}}}

      close $fh;
      $c_shared->incrby($c);

   } 0, 1, 0.2;   # p1: seq_beg, seq_end, seq_step

   MCE::Loop::finish();

} 0, 1, 0.2;   # p0: seq_beg, seq_end, seq_step

MCE::Loop::finish();

system("cat $tmp_dir/$_/[1-6]; rm -fr $tmp_dir/$_") for 1..6;

printf STDERR "Took: %0.3f seconds [%ld]\n", time() - $start, $c_shared->get();

Results: taken from a 4.2 GHz machine with 8 real-cores, hyper-threads disabled

Combinatorics : 459.752 seconds
6 workers     : 145.695 seconds
36 workers    : 109.134 seconds   <- my CPU has 8 cores

Consuming 32 real-cores and a little more is possible on a 64-way box. Afterwards, one may use MCE or a parallel module of choice to process the output file in parallel.

Disclaimer: My Linux box is tuned to 4.2 GHz on all 8 cores, which is not common. The takeaway is that nested parallel loops are possible with care. On Linux, /dev/shm is beneficial for temporary storage.

Regards, Mario

Replies are listed 'Best First'.
Re^2: Parallelization of multiple nested loops
by marioroy (Priest) on Feb 10, 2018 at 07:04 UTC

    I tried the repetition using Inline C. To ensure the Inline C code does not clobber MCE's IPC file handles, I open the file handles in Perl and pass the file descriptors to C.

    6 workers: 6.2x faster than Algorithm::Combinatorics on machines with 6 real-cores

    use strict; use warnings;

    die "Not UNIX OS\n" if $^O eq 'MSWin32';

    # usage: script.pl > out

    use Inline 'C' => Config => CCFLAGSEX => '-O2';
    use Inline 'C' => <<'END_C';

    #include <stdio.h>

    unsigned long c_repetition (int fd, float p0)
    {
       unsigned long c = 0;
       FILE *stream = fdopen(fd, "wb");
       float p1,p2,p3,p4,p5,p6,p7,p8,p9,p10;

       for (p1=0; p1<=1; p1+=0.2) {
        for (p2=0; p2<=1; p2+=0.2) {
         for (p3=0; p3<=1; p3+=0.2) {
          for (p4=0; p4<=1; p4+=0.2) {
           for (p5=0; p5<=1; p5+=0.2) {
            for (p6=0; p6<=1; p6+=0.2) {
             for (p7=0; p7<=1; p7+=0.2) {
              for (p8=0; p8<=1; p8+=0.2) {
               for (p9=0; p9<=1; p9+=0.2) {
                for (p10=0; p10<=1; p10+=0.2) {
                 //-------------
                 fprintf(stream,
                    "%0.1f\t%0.1f\t%0.1f\t%0.1f\t%0.1f\t%0.1f\t%0.1f\t%0.1f\t%0.1f\t%0.1f\t%0.1f\t1.0\t1.0\n",
                    p0,p1,p2,p3,p4,p5,p6,p7,p8,p9,p10);
                 c++;
                 //-------------
       }}}}}}}}}}

       fflush(stream);
       fclose(stream);

       return c;
    }

    END_C

    # Run on UNIX machines with 48+ GiB of RAM.
    # Otherwise, remove the -use_dev_shm argument.
    # Beware, consumes 18 GiB in temp dir.

    use MCE::Signal qw[ $tmp_dir -use_dev_shm ];
    use Time::HiRes qw[ time ];
    use MCE::Loop;
    use MCE::Shared;

    my $start    = time;
    my $c_shared = MCE::Shared->scalar(0);

    MCE::Loop::init { max_workers => 6, chunk_size => 1 };

    # loop through desired combinations
    mce_loop_s {
       my $p0 = $_;
       open my $fh, ">", "$tmp_dir/".MCE->chunk_id();
       my $c = c_repetition(fileno($fh), $p0);
       close $fh;
       $c_shared->incrby($c);
    } 0.0, 1.0, 0.2, '%0.1f';   # p0: seq_beg, seq_end, seq_step, format

    MCE::Loop::finish();

    system("cat $tmp_dir/[1-6]; rm -fr $tmp_dir");

    printf STDERR "Took: %0.3f seconds [%ld]\n", time() - $start, $c_shared->get();
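    The essential trick in the script above is handing C a raw file descriptor (via Perl's fileno) and wrapping it with fdopen, so the C side never touches Perl's or MCE's own handles. A minimal standalone sketch of the same pattern (the temp-file path is illustrative, created with mkstemp):

    ```c
    #include <assert.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int main(void) {
        /* Stand-in for Perl's fileno($fh): any writable descriptor. */
        char path[] = "/tmp/fdopen_demo_XXXXXX";
        int fd = mkstemp(path);
        assert(fd >= 0);

        /* Wrap the descriptor in a buffered stream, as c_repetition does. */
        FILE *stream = fdopen(fd, "wb");
        assert(stream != NULL);
        fprintf(stream, "0.0\t0.2\n");
        fflush(stream);
        fclose(stream);          /* also closes the descriptor */

        /* Read back to verify the write went through the descriptor. */
        char buf[16] = {0};
        FILE *in = fopen(path, "rb");
        fread(buf, 1, sizeof(buf) - 1, in);
        fclose(in);
        unlink(path);
        assert(strcmp(buf, "0.0\t0.2\n") == 0);
        puts("ok");
        return 0;
    }
    ```

    In the real script the descriptor comes from Perl, so only fdopen, the writes, fflush, and fclose live on the C side.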

    36 workers: use-case for a 64-way box (32 real-cores + 32 hyper-threads)

    This involves nested parallel loops, which are possible using MCE. The shared counter increments correctly no matter how many levels deep; locking is handled automatically via the OO interface.

    use strict; use warnings;

    die "Not UNIX OS\n" if $^O eq 'MSWin32';

    # usage: script.pl > out

    use Inline 'C' => Config => CCFLAGSEX => '-O2';
    use Inline 'C' => <<'END_C';

    #include <stdio.h>

    unsigned long c_repetition (int fd, float p0, float p1)
    {
       unsigned long c = 0;
       FILE *stream = fdopen(fd, "wb");
       float p2,p3,p4,p5,p6,p7,p8,p9,p10;

       for (p2=0; p2<=1; p2+=0.2) {
        for (p3=0; p3<=1; p3+=0.2) {
         for (p4=0; p4<=1; p4+=0.2) {
          for (p5=0; p5<=1; p5+=0.2) {
           for (p6=0; p6<=1; p6+=0.2) {
            for (p7=0; p7<=1; p7+=0.2) {
             for (p8=0; p8<=1; p8+=0.2) {
              for (p9=0; p9<=1; p9+=0.2) {
               for (p10=0; p10<=1; p10+=0.2) {
                //-------------
                fprintf(stream,
                   "%0.1f\t%0.1f\t%0.1f\t%0.1f\t%0.1f\t%0.1f\t%0.1f\t%0.1f\t%0.1f\t%0.1f\t%0.1f\t1.0\t1.0\n",
                   p0,p1,p2,p3,p4,p5,p6,p7,p8,p9,p10);
                c++;
                //-------------
       }}}}}}}}}

       fflush(stream);
       fclose(stream);

       return c;
    }

    END_C

    # Run on UNIX machines with 48+ GiB of RAM.
    # Otherwise, remove the -use_dev_shm argument.
    # Beware, consumes 18 GiB in temp dir.

    use MCE::Signal qw[ $tmp_dir -use_dev_shm ];
    use Time::HiRes qw[ time ];
    use MCE::Loop;
    use MCE::Shared;

    my $start    = time;
    my $c_shared = MCE::Shared->scalar(0);

    MCE::Loop::init { max_workers => 6, chunk_size => 1 };

    # loop through desired combinations
    mce_loop_s {
       my $p0 = $_;

       MCE::Loop::init { max_workers => 6, chunk_size => 1 };
       $tmp_dir .= "/".MCE->chunk_id();
       mkdir $tmp_dir;

       mce_loop_s {
          my $p1 = $_;
          open my $fh, ">", "$tmp_dir/".MCE->chunk_id();
          my $c = c_repetition(fileno($fh), $p0, $p1);
          close $fh;
          $c_shared->incrby($c);
       } 0.0, 1.0, 0.2, '%0.1f';   # p1: seq_beg, seq_end, seq_step, format

       MCE::Loop::finish();

    } 0.0, 1.0, 0.2, '%0.1f';   # p0: seq_beg, seq_end, seq_step, format

    MCE::Loop::finish();

    system("cat $tmp_dir/$_/[1-6]; rm -fr $tmp_dir/$_") for 1..6;

    printf STDERR "Took: %0.3f seconds [%ld]\n", time() - $start, $c_shared->get();

    Results: taken from a 4.2 GHz machine with 8 real-cores, hyper-threads disabled

    Combinatorics : 459.752 seconds
    6 workers     : 74.394 seconds
    36 workers    : 58.021 seconds   <- my CPU has 8 cores

    Consuming 32 real-cores and a little more is possible on a 64-way box. Afterwards, one may use MCE or a parallel module of choice to process the output file in parallel.

    Disclaimer: My Linux box is tuned to 4.2 GHz on all 8 cores, which is not common. The takeaway is that nested parallel loops are possible with care. On Linux, /dev/shm is beneficial for temporary storage.

    Regards, Mario

      I tried again, this time removing the overhead associated with fprintf. To ensure the Inline C code does not clobber MCE's IPC file handles, I open the file handles in Perl and pass the file descriptors to C.

      6 workers: 25x faster than Algorithm::Combinatorics on machines with 6 real-cores

      use strict; use warnings;

      die "Not UNIX OS\n" if $^O eq 'MSWin32';

      # usage: script.pl > out

      use Inline 'C' => Config => CCFLAGSEX => '-O2';
      use Inline 'C' => <<'END_C';

      #include <stdio.h>

      void c_fput_float(float value, char c, FILE *stream)
      {
         static char buf[] = "0.0\n";
         int whole = (int) value;
         int frac  = (int) ((value - whole) * 10);

         buf[0] = '0' + whole;
         buf[2] = '0' + frac;
         buf[3] = c;

         fputs(buf, stream);
      }

      unsigned long c_repetition (int fd, float p0)
      {
         unsigned long count = 0;
         FILE *stream = fdopen(fd, "wb");
         float p1,p2,p3,p4,p5,p6,p7,p8,p9,p10;

         for (p1=0; p1<=1; p1+=0.2) {
          for (p2=0; p2<=1; p2+=0.2) {
           for (p3=0; p3<=1; p3+=0.2) {
            for (p4=0; p4<=1; p4+=0.2) {
             for (p5=0; p5<=1; p5+=0.2) {
              for (p6=0; p6<=1; p6+=0.2) {
               for (p7=0; p7<=1; p7+=0.2) {
                for (p8=0; p8<=1; p8+=0.2) {
                 for (p9=0; p9<=1; p9+=0.2) {
                  for (p10=0; p10<=1; p10+=0.2) {
                   c_fput_float(p0,  '\t', stream);
                   c_fput_float(p1,  '\t', stream);
                   c_fput_float(p2,  '\t', stream);
                   c_fput_float(p3,  '\t', stream);
                   c_fput_float(p4,  '\t', stream);
                   c_fput_float(p5,  '\t', stream);
                   c_fput_float(p6,  '\t', stream);
                   c_fput_float(p7,  '\t', stream);
                   c_fput_float(p8,  '\t', stream);
                   c_fput_float(p9,  '\t', stream);
                   c_fput_float(p10, '\t', stream);
                   c_fput_float(1.0, '\t', stream);
                   c_fput_float(1.0, '\n', stream);
                   count++;
         }}}}}}}}}}

         fflush(stream);
         fclose(stream);

         return count;
      }

      END_C

      # Run on UNIX machines with 48+ GiB of RAM.
      # Otherwise, remove the -use_dev_shm argument.
      # Beware, consumes 18 GiB in temp dir.

      use MCE::Signal qw[ $tmp_dir -use_dev_shm ];
      use Time::HiRes qw[ time ];
      use MCE::Loop;
      use MCE::Shared;

      my $start    = time;
      my $c_shared = MCE::Shared->scalar(0);

      MCE::Loop::init { max_workers => 6, chunk_size => 1 };

      # loop through desired combinations
      mce_loop_s {
         my $p0 = $_;
         open my $fh, ">", "$tmp_dir/".MCE->chunk_id();
         my $c = c_repetition(fileno($fh), $p0);
         close $fh;
         $c_shared->incrby($c);
      } 0.0, 1.0, 0.2, '%0.1f';   # p0: seq_beg, seq_end, seq_step, format

      MCE::Loop::finish();

      system("cat $tmp_dir/[1-6]; rm -fr $tmp_dir");

      printf STDERR "Took: %0.3f seconds [%ld]\n", time() - $start, $c_shared->get();
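      The c_fput_float helper above sidesteps fprintf's format parsing by patching two digits into a fixed 4-byte buffer; it only handles values in [0, 9.9] with one decimal place, which is all this workload needs. A standalone check of that helper against snprintf, on test values mirroring the 0.2 grid:

      ```c
      #include <assert.h>
      #include <stdio.h>
      #include <string.h>

      /* Same formatting helper as in the Inline C block above. */
      static void c_fput_float(float value, char c, FILE *stream) {
          static char buf[] = "0.0\n";
          int whole = (int) value;
          int frac  = (int) ((value - whole) * 10);
          buf[0] = '0' + whole;
          buf[2] = '0' + frac;
          buf[3] = c;
          fputs(buf, stream);
      }

      int main(void) {
          float grid[] = {0.0f, 0.2f, 0.4f, 0.6f, 0.8f, 1.0f};
          char expect[64] = "", got[64] = "";
          FILE *stream = tmpfile();

          for (int i = 0; i < 6; i++) {
              char tmp[8];
              snprintf(tmp, sizeof tmp, "%0.1f\t", grid[i]);  /* reference */
              strcat(expect, tmp);
              c_fput_float(grid[i], '\t', stream);
          }
          fflush(stream);
          rewind(stream);
          fread(got, 1, sizeof(got) - 1, stream);
          fclose(stream);

          /* The hand-rolled formatter matches snprintf on this grid. */
          assert(strcmp(got, expect) == 0);
          puts("ok");
          return 0;
      }
      ```

      The speedup in the timings below comes from replacing thirteen format-string parses per output line with thirteen 4-byte fputs calls.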

      36 workers: use-case for a 64-way box (32 real-cores + 32 hyper-threads)

      This involves nested parallel loops, which are possible using MCE. The shared counter increments correctly no matter how many levels deep; locking is handled automatically via the OO interface.

      use strict; use warnings;

      die "Not UNIX OS\n" if $^O eq 'MSWin32';

      # usage: script.pl > out

      use Inline 'C' => Config => CCFLAGSEX => '-O2';
      use Inline 'C' => <<'END_C';

      #include <stdio.h>

      void c_fput_float(float value, char c, FILE *stream)
      {
         static char buf[] = "0.0\n";
         int whole = (int) value;
         int frac  = (int) ((value - whole) * 10);

         buf[0] = '0' + whole;
         buf[2] = '0' + frac;
         buf[3] = c;

         fputs(buf, stream);
      }

      unsigned long c_repetition (int fd, float p0, float p1)
      {
         unsigned long count = 0;
         FILE *stream = fdopen(fd, "wb");
         float p2,p3,p4,p5,p6,p7,p8,p9,p10;

         for (p2=0; p2<=1; p2+=0.2) {
          for (p3=0; p3<=1; p3+=0.2) {
           for (p4=0; p4<=1; p4+=0.2) {
            for (p5=0; p5<=1; p5+=0.2) {
             for (p6=0; p6<=1; p6+=0.2) {
              for (p7=0; p7<=1; p7+=0.2) {
               for (p8=0; p8<=1; p8+=0.2) {
                for (p9=0; p9<=1; p9+=0.2) {
                 for (p10=0; p10<=1; p10+=0.2) {
                  c_fput_float(p0,  '\t', stream);
                  c_fput_float(p1,  '\t', stream);
                  c_fput_float(p2,  '\t', stream);
                  c_fput_float(p3,  '\t', stream);
                  c_fput_float(p4,  '\t', stream);
                  c_fput_float(p5,  '\t', stream);
                  c_fput_float(p6,  '\t', stream);
                  c_fput_float(p7,  '\t', stream);
                  c_fput_float(p8,  '\t', stream);
                  c_fput_float(p9,  '\t', stream);
                  c_fput_float(p10, '\t', stream);
                  c_fput_float(1.0, '\t', stream);
                  c_fput_float(1.0, '\n', stream);
                  count++;
         }}}}}}}}}

         fflush(stream);
         fclose(stream);

         return count;
      }

      END_C

      # Run on UNIX machines with 48+ GiB of RAM.
      # Otherwise, remove the -use_dev_shm argument.
      # Beware, consumes 18 GiB in temp dir.

      use MCE::Signal qw[ $tmp_dir -use_dev_shm ];
      use Time::HiRes qw[ time ];
      use MCE::Loop;
      use MCE::Shared;

      my $start    = time;
      my $c_shared = MCE::Shared->scalar(0);

      MCE::Loop::init { max_workers => 6, chunk_size => 1 };

      # loop through desired combinations
      mce_loop_s {
         my $p0 = $_;

         MCE::Loop::init { max_workers => 6, chunk_size => 1 };
         $tmp_dir .= "/".MCE->chunk_id();
         mkdir $tmp_dir;

         mce_loop_s {
            my $p1 = $_;
            open my $fh, ">", "$tmp_dir/".MCE->chunk_id();
            my $c = c_repetition(fileno($fh), $p0, $p1);
            close $fh;
            $c_shared->incrby($c);
         } 0.0, 1.0, 0.2, '%0.1f';   # p1: seq_beg, seq_end, seq_step, format

         MCE::Loop::finish();

      } 0.0, 1.0, 0.2, '%0.1f';   # p0: seq_beg, seq_end, seq_step, format

      MCE::Loop::finish();

      system("cat $tmp_dir/$_/[1-6]; rm -fr $tmp_dir/$_") for 1..6;

      printf STDERR "Took: %0.3f seconds [%ld]\n", time() - $start, $c_shared->get();

      Results: taken from a 4.2 GHz machine with 8 real-cores, hyper-threads disabled

      Combinatorics : 459.752 seconds
      6 workers     : 18.420 seconds
      36 workers    : 15.593 seconds   <- my CPU has 8 cores

      Consuming 32 real-cores and a little more is possible on a 64-way box. Afterwards, one may use MCE or a parallel module of choice to process the output file in parallel.

      Disclaimer: My Linux box is tuned to 4.2 GHz on all 8 cores, which is not common. The takeaway is that nested parallel loops are possible with care. On Linux, /dev/shm is beneficial for temporary storage.

      Regards, Mario
