http://www.perlmonks.org?node_id=843140


in reply to Re^3: Optimizing a double loop
in thread Optimizing a double loop

But perhaps I can store the sets more efficiently? Maybe storing them all in one file is better?

Given a 4GB file containing 1000 sets of 1 million integers stored in packed binary form, the following Inline::C code accumulated the sums and sums of squares in just under 40 seconds. With some more effort that could be parallelised, but it probably isn't necessary:

#! perl -slw
use 5.010;
use strict;

use Inline C => Config => BUILD_NOISY => 1;
use Inline C => <<'END_C', NAME => '_842899', CLEAN_AFTER_BUILD => 0;

/* Add each of the n ints in 'in' to the running sums in 'acc',
   and its square to the running sums of squares in 'SoS'. */
void sumEm( SV *acc, SV *SoS, SV *in, int n ) {
    int *iAcc = (int*)SvPVX( acc );
    int *iIn  = (int*)SvPVX( in );
    int *iSoS = (int*)SvPVX( SoS );
    int i;

    for( i = 0; i < n; ++i ) {
        iAcc[ i ] += iIn[ i ];
        iSoS[ i ] += iIn[ i ] * iIn[ i ];
    }
}
END_C

use Time::HiRes qw[ time ];

my $start = time;

## Two 4MB zeroed buffers: 1e6 32-bit accumulators each.
my $acc = chr(0) x 4e6;
my $sos = $acc;

open I, '<:raw', 'cells' or die $!;

## Read one packed set (1e6 ints; 4MB) at a time and accumulate.
while( sysread( I, my $row, 4e6 ) ) {
    sumEm( $acc, $sos, $row, 1e6 );
}
close I;

printf "Took: %.6f seconds\n", time() - $start;
<STDIN>;

my @sums = unpack 'V*', $acc;
my @SoSs = unpack 'V*', $sos;

print "$sums[ $_ ] : $SoSs[ $_ ]" for 0 .. $#sums;

__END__

C:\test>842899.pl
Took: 39.318000 seconds
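For what it's worth, the per-position mean and standard deviation over the 1000 sets then fall straight out of those buffers. A minimal sketch; it assumes the @sums and @SoSs arrays from the script above and uses the population formula variance = E[x^2] - (E[x])^2:

my $N = 1000;   ## number of sets accumulated
for my $i ( 0 .. $#sums ) {
    my $mean = $sums[ $i ] / $N;
    my $var  = $SoSs[ $i ] / $N - $mean * $mean;   ## E[x^2] - (E[x])^2
    printf "%7d: mean %.3f stddev %.3f\n", $i, $mean, sqrt( $var );
}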

So, depending on how long your current method takes, it might be worth considering.
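If you want to experiment with the above, a test file in the same packed format is easy to knock up. A sketch only; the filename 'cells' and the 32-bit little-endian ('V') layout match the code above, and the small random values are an arbitrary choice that keeps the 32-bit sum-of-squares accumulators from overflowing:

#! perl -slw
use strict;

## Write 1000 sets of 1e6 packed 32-bit integers (4GB total).
open my $out, '>:raw', 'cells' or die $!;
print $out pack 'V*', map int( rand 1000 ), 1 .. 1e6
    for 1 .. 1000;
close $out;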


As for your two "other types of query": on the surface at least, it sounds like they could be solved quite easily using SQL. Calculating means and standard deviations is bread-and-butter SQL.

Given the more realistic volumes of data you are now describing, an RDBMS, or even SQLite, seems like it might be a good fit for your tasks, as sketched below.
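For instance, with SQLite via DBI, something along these lines would do it. A sketch only; the database file, table and column names ('cells.db', 'sets( id, value )') are assumptions, and since SQLite has no built-in STDDEV(), the standard deviation is derived from AVG():

use strict;
use warnings;
use DBI;

my $dbh = DBI->connect( 'dbi:SQLite:dbname=cells.db', '', '', { RaiseError => 1 } );

## Per-set mean and (population) variance via E[x^2] - (E[x])^2,
## as SQLite provides AVG() but no STDDEV().
my $sth = $dbh->prepare( q{
    SELECT id,
           AVG( value )                                        AS mean,
           AVG( value * value ) - AVG( value ) * AVG( value )  AS variance
      FROM sets
     GROUP BY id
} );
$sth->execute;

while( my( $id, $mean, $var ) = $sth->fetchrow_array ) {
    printf "set %d: mean %.3f, stddev %.3f\n", $id, $mean, sqrt $var;
}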


Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.