
Re^2: Challenge: Number of unique ways to reach target sum

by fergal (Chaplain)
on Feb 15, 2006 at 01:50 UTC ( #530280 )

in reply to Re: Challenge: Number of unique ways to reach target sum
in thread Challenge: Number of unique ways to reach target sum

The code can be made to not only count the solutions, but to iterate over them as well (using a callback & accumulation of the solution): ... But this takes a lot longer, as memoizing does you no good.

No need to abandon Memoize. First generate all the answers (takes about 6 seconds on my computer; your original takes about a second less just to count them), then print them out (this can take considerably longer, depending on the format).

The key is not to generate strings in the first stage but to generate a tree (a very lispish tree). So say our target is 197 and we can use 2 coins. We would generate

$VAR1 = [ [ '99', [ '98' ] ], [ '100', [ '97' ] ] ];

You can see that there are 2 possibilities: 100+97 and 99+98. Slightly more complex: with 3 coins and a target of 295, the result is

$VAR1 = [ [ '100', [ [ '98', [ '97' ] ], [ '99', [ '96' ] ] ] ] ];

You can think of this as 100 + (98+97 or 99+96).

In some ways, just dumping out this structure already answers the question. Recursing over it and printing x+y+...+z for each solution runs at a bit more than 100,000 results per second on my computer (which is slower than I'd expected).
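As an aside, the shared structure means you can also count the solutions straight from the tree, without expanding any strings. This is a minimal sketch, not part of the code below: the function name count_ways and the refaddr-keyed cache are my own. Because Memoize hands back the same reference for repeated sub-problems, caching by reference address counts each distinct subtree only once:

```perl
use strict;
use warnings;
use Scalar::Util 'refaddr';

my %count_cache;

sub count_ways {
    my ($ways) = @_;
    return 0 unless $ways;    # a false value means "no solutions here"
    my $key = refaddr($ways);
    return $count_cache{$key} if exists $count_cache{$key};
    my $n = 0;
    for my $way (@$ways) {
        # a plain string is a leaf, i.e. one complete solution;
        # a [coin, subtree] pair contributes the subtree's count
        $n += ref $way ? count_ways($way->[1]) : 1;
    }
    return $count_cache{$key} = $n;
}
```

On the 2-coin tree above, count_ways() returns 2, and it only ever visits each shared subtree once.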

Memory-wise, this is very efficient: when the same subtree appears in 2 places, you don't get 2 copies, you just get 2 references to the same subtree.

It is possible to use memoization for the printing process too, but you can't memoize the whole thing or you'll run out of memory. I've written something that starts using memoized data after a certain depth, and it's doing > 1 million per second (and accelerating as it benefits more and more from the cache). I reckon it should take about 3 hours, but I don't know how much acceleration there is, so I'm going for a bath :)

Code for the simpler version is below (based on blokhead's original):

use strict;
use warnings;
use List::Util 'min';
use POSIX 'ceil';

sub make_ways {
    my ($N, $S, $T) = @_;

    # one coin left - can we do it?
    if ($S == 1) {
        if ($T <= $N) {
            return ["$T"];
        }
        else {
            return 0;
        }
    }

    my $min = ceil( (2*$T - $S + $S**2) / (2*$S) );
    my $max = min( $T - ($S-1)*$S/2, $N );

    my @all_ways;
    for my $K ($min .. $max) {
        my $ways = make_ways($K-1, $S-1, $T-$K);
        if ($ways) {
            push(@all_ways, ["$K", $ways]);
        }
    }
    if (@all_ways) {
        return \@all_ways;
    }
    else {
        return 0;
    }
}

use Memoize;
memoize 'make_ways';

# useful for printing out details of a sane set
use Data::Dumper;

my $printed = 0;

my $ways = make_ways(100, 10, 667);
#my $ways = make_ways(20, 5, 30);
#print Dumper($ways);

print_ways($ways, "");

sub print_ways {
    my ($ways, $base) = @_;
    for my $way (@$ways) {
        if (ref $way) {
            my ($coin, $more_ways) = @$way;
            my $new_base = length($base) ? "$coin+$base" : $coin;
            print_ways($more_ways, $new_base);
        }
        else {
            print STDERR "printed $printed\n" unless (++$printed % 1000000);
            print "$base+$way\n";
        }
    }
}

Re^3: Challenge: Number of unique ways to reach target sum
by fergal (Chaplain) on Feb 15, 2006 at 03:03 UTC

    When I came back it was using 500M of memory (growing very very slowly) and was clocking 1.9M answers per second which puts it at a total time between 2 and 3 hours. Which isn't bad.

    I'm leaving it running overnight on a machine to verify that.

    By the way, to store the answers would require > 400GB of space!
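    For the curious, that figure is easy to sanity-check from the numbers above. A back-of-envelope sketch; the 2.5-hour runtime and ~30 bytes per printed line are my assumptions, not measurements:

```perl
use strict;
use warnings;

# rough estimate of the total output size, from the observed rate above
my $rate    = 1.9e6;              # answers per second (observed)
my $seconds = 2.5 * 3600;         # assumed total runtime of ~2.5 hours
my $answers = $rate * $seconds;   # roughly 17 billion answers

# assume ~10 coins of ~2 digits each, joined by '+', plus a newline
my $bytes_per_line = 10 * 2 + 9 + 1;
my $total_bytes    = $answers * $bytes_per_line;

printf "~%.1f billion answers, ~%.0f GB\n",
    $answers / 1e9, $total_bytes / 2**30;
```

    With those assumptions it comes out comfortably above 400GB, which matches the claim.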

    The code below has two functions. string_ways() generates an array of strings from a tree of ways of summing things; it's memoized. nasty_print_ways() is like print_ways() but it also takes a depth. When it reaches that depth, it starts calling string_ways() to get lists of strings to finish off the current way, which avoids recomputing a lot of stuff. You can adjust the depth if you have more or less memory, but I think a change of 1 results in about a 100-fold difference in memory usage.

    This dumps out performance and other stats every million lines: how many lines have been printed, how many times string_ways() was called, how many times it was really called (that is, the memoization didn't help us), and a rate in lines/CPU second.

    nasty_print_ways($ways, "", 6);

    # do my own memoizing as I couldn't get Memoize to work, possibly
    # to do with the arguments being array refs
    my %strings;
    my $real_stringed = 0;

    sub string_ways {
        my $ways = $_[0];
        if (my $strings = $strings{$ways}) {
            return $strings;
        }
        $real_stringed++;
        my @strings;
        for my $way (@$ways) {
            if (ref $way) {
                my ($coin, $more_ways) = @$way;
                push(@strings, map { "$_+$coin" } @{ string_ways($more_ways) });
            }
            else {
                push(@strings, $way);
            }
        }
        return $strings{$ways} = \@strings;
    }

    my $stringed = 0;

    sub nasty_print_ways {
        my ($ways, $base, $depth) = @_;
        if ($depth == 0) {
            $stringed++;
            my $strings = string_ways($ways);
            for my $string (@$strings) {
                $printed++;
                print STDERR "p/s/r $printed / $stringed / $real_stringed "
                    . "rate = " . ($printed / (times())[0]) . "\n"
                    unless ($printed % 1000000);
                print "$string+$base\n";
            }
        }
        else {
            $depth--;
            for my $way (@$ways) {
                if (ref $way) {
                    my ($coin, $more_ways) = @$way;
                    my $new_base = length($base) ? "$coin+$base" : $coin;
                    nasty_print_ways($more_ways, $new_base, $depth);
                }
                else {
                    print STDERR "printed $printed\n"
                        unless (++$printed % 1000000);
                    print "$way+$base\n";
                }
            }
        }
    }
