in reply to Re: NP-complete sometimes isn't (A benchmark) in thread NP-complete sometimes isn't
My internet was down for a while, so I amused myself by optimizing my code a little. To be specific, I only look at pairs of partitions with a positive difference, I moved the "break out early" checks out of the inner loop, and I switched from using hashes to arrays.
sub find_best_partition {
    my @numbers = sort {abs($b) <=> abs($a) or $a <=> $b} @_;
    # First we're going to find a "pretty good" partition. If we can,
    # we'll find a partition that finishes off this way. That will
    # usually let the full algorithm abort early.
    my @in_partition;
    my $current_remaining = 0;
    for my $n (@numbers) {
        if ($current_remaining < 0) {
            if ($n > 0) {
                push @in_partition, 1;
                $current_remaining += $n;
            }
            else {
                push @in_partition, 0;
                $current_remaining -= $n;
            }
        }
        else {
            if ($n > 0) {
                push @in_partition, 0;
                $current_remaining -= $n;
            }
            else {
                push @in_partition, 1;
                $current_remaining += $n;
            }
        }
    }
    my $known_solution = $current_remaining;

    # Cheat, we're going to find out the extremes.
    my @max_sum_of_previous = 0;
    my $sum = 0;
    for my $n (@numbers) {
        $sum += abs($n);
        push @max_sum_of_previous, $sum;
    }

    # We're going to try to find partitions that add up to each
    # possible number that can be added up to.
    my $old;
    my $new = [ [[], []] ];
    my $i = 1;
    my $answer;
    for my $n (@numbers) {
        $old = $new;
        $new = [];
        $i++;
        my $upper = $#$old;
        # Skip values too big to lead to an improvement.
        if (
            $upper
            > $sum - $max_sum_of_previous[$i] + abs($known_solution)
        ) {
            $upper
                = $sum - $max_sum_of_previous[$i] + abs($known_solution);
        }
        if ($current_remaining > 0) {
            if ($old->[$current_remaining]) {
                my ($p1, $p2) = @{ $old->[$current_remaining] };
                $answer = [$p2, $p1];
                last;
            }
        }
        elsif ($old->[-$current_remaining]) {
            $answer = $old->[-$current_remaining];
            last;
        }
        for my $key (0..$upper) {
            my $value = $old->[$key] or next;
            my ($p1, $p2) = @$value;
            if ($key + $n < 0) {
                $new->[-$key - $n] = [$p2, [$n, $p1]];
            }
            else {
                $new->[$key + $n] = [[$n, $p1], $p2];
            }
            if ($key - $n < 0) {
                $new->[$n - $key] = [[$n, $p2], $p1];
            }
            else {
                $new->[$key - $n] = [$p1, [$n, $p2]];
            }
        }
        # Adjust $current_remaining for the fact we're skipping
        # the $i'th element.
        if ($in_partition[$i]) {
            $current_remaining -= $n;
        }
        else {
            $current_remaining += $n;
        }
    }
    if (not $answer) {
        # Do not include any of the original partition.
        $i = @numbers;
        for my $j (0..$#$new) {
            if ($new->[$j]) {
                $answer = $new->[$j];
                last;
            }
        }
    }
    # We need to flatten nested arrays, and append the tail.
    my ($p1, $p2) = @$answer;
    my @part_1;
    while (@$p1) {
        push @part_1, $p1->[0];
        $p1 = $p1->[1];
    }
    push @part_1
        , map {
            $in_partition[$_] ? $numbers[$_] : ()
        } $i..$#numbers;
    my @part_2;
    while (@$p2) {
        push @part_2, $p2->[0];
        $p2 = $p2->[1];
    }
    push @part_2
        , map {
            $in_partition[$_] ? () : $numbers[$_]
        } $i..$#numbers;
    return (\@part_1, \@part_2);
}
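For readers who find the nested-arrayref bookkeeping above hard to follow, here is a minimal Python sketch of just the core idea: keep, for every non-negative difference reachable so far, one pair of partitions achieving it. This is illustrative only; it omits the greedy seed, the early abort, and the `$upper` bound that make the Perl version fast.

```python
def best_partition(numbers):
    """Split numbers into two lists whose sums are as close as possible."""
    # Map each reachable non-negative difference sum(p1) - sum(p2)
    # to one pair of partitions achieving it.
    states = {0: ([], [])}
    for n in numbers:
        nxt = {}
        for diff, (p1, p2) in states.items():
            # Try n on either side of every known partial partition.
            for d, pair in ((diff + n, (p1 + [n], p2)),
                            (diff - n, (p1, p2 + [n]))):
                if d < 0:                       # keep keys non-negative
                    d, pair = -d, (pair[1], pair[0])
                nxt.setdefault(d, pair)
        states = nxt
    return states[min(states)]    # pair with the smallest difference
```

For example, `best_partition([8, 7, 6, 5, 4])` returns a perfect split (8+7 = 6+5+4 = 15). The number of keys is bounded by the sum of the inputs rather than 2**N, which is the whole point of the approach.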
Re^3: NP-complete sometimes isn't (A benchmark)
by BrowserUk (Pope) on Sep 04, 2008 at 08:29 UTC

Can I offer a suggestion? Okay, I'm going to offer a suggestion, whether or not you have the time or inclination to do anything with it... :)
I think it would greatly simplify and optimise your algorithm to avoid the conditionals associated with negative numbers. You can do that by applying a simple sort to the inputs and then adjusting the whole array to make it all positive (and undo it on output):
sub partition {
    my @numbers = sort{ $b <=> $a } @_;
    my $adjust = 0;
    if( $numbers[ -1 ] < 0 ) {
        $adjust = -$numbers[ -1 ];
        $_ += $adjust for @numbers;
    }
    ...
    $_ -= $adjust for @part_1, @part_2;
    return \@part_1, \@part_2;
}
Note: I attempted to make this change and offer it back to you, but there is something about your algorithm that I am not understanding, because my attempts disappear up their own jacksies (sp? :).
Examine what is said, not who speaks - Silence betokens consent - Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.

I thought of that, and you can't do it. Suppose we have 10 numbers and the optimal partition has 6 on one side and 4 on the other. Then you've added 6*$adjust to one side and 4*$adjust to the other, and the partition is no longer going to look optimal.
I did think about making all numbers be their absolute value, and then reverse which partition the negative numbers go in, but that logic looked more complicated and convoluted than the way I originally wrote it.
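The size argument above can be checked with two lines of arithmetic (Python, with made-up numbers): shifting every element by a constant adds size-of-side × adjust to each side's sum, so a perfect split with unequal sides stops looking perfect.

```python
adjust = 5
side_a = [9, 8, 7, 6, 5, 4]   # six numbers, sum 39
side_b = [20, 10, 5, 4]       # four numbers, sum 39: a perfect split
# After shifting everything by `adjust`, the difference is no longer 0:
new_diff = sum(x + adjust for x in side_a) - sum(x + adjust for x in side_b)
assert new_diff == (len(side_a) - len(side_b)) * adjust   # 2 * 5 = 10
```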
Re^3: NP-complete sometimes isn't (A benchmark)
by gone2015 (Deacon) on Sep 22, 2008 at 14:58 UTC

Sadly, I got interested in this problem. Although it is nominally O(2**(N-1)), it does seem possible to do quite well for a lot less work - and, the larger the set, the easier it appears to get!!
So the trick with any decision problem is to prune the search space. tilly observed that although the search space is 2**(N-1), the number of unique nodes may be a lot smaller.
The other trick in the bag is to see if some heuristic can help us prune. As others have observed, the set can be partitioned quite effectively by:
- sorting the set into descending order
- assigning values in turn to the partition with the smaller sum at the time
This 'hack' works wonderfully for sets containing numbers drawn from a range 1..n and pretty well for m..n - and, the bigger the set, the better it works!
(I tested this against putting values into one partition until it just exceeds the perfect split - starting with the set sorted in descending order and with random selection from the set. I even tried putting values into one partition until it just doesn't exceed the perfect split, and then finding the best remaining value. No better.)
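The 'hack' described above is short enough to sketch in a few lines (Python, for illustration; the function name is mine):

```python
def greedy_partition(numbers):
    """Sort descending, then always give the next number to the lighter side."""
    p1, p2 = [], []
    s1 = s2 = 0
    for n in sorted(numbers, reverse=True):
        if s1 <= s2:
            p1.append(n); s1 += n
        else:
            p2.append(n); s2 += n
    return p1, p2
```

It is good but not optimal: `greedy_partition([5, 4, 3, 2])` splits perfectly, while `greedy_partition([8, 7, 6, 5, 4])` leaves a difference of 4 even though a perfect split (8+7 = 6+5+4) exists.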
Looking at the result of the initial 'hack', one can see numbers that could be swapped between partitions to improve things. This 'refinement' is a linear scan down the two partitions - a bit worse than linear if numbers can be exchanged.
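One possible reading of that 'refinement' pass (my sketch in Python, not gone2015's code): look for a single swap between the partitions that shrinks the difference, and repeat until no swap helps.

```python
def refine(p1, p2):
    """Repeatedly swap one element between p1 and p2 while it helps."""
    improved = True
    while improved:
        improved = False
        diff = sum(p1) - sum(p2)
        for i, a in enumerate(p1):
            for j, b in enumerate(p2):
                # Swapping a and b changes the difference by 2*(b - a).
                if abs(diff + 2 * (b - a)) < abs(diff):
                    p1[i], p2[j] = b, a
                    improved = True
                    break
            if improved:
                break
    return p1, p2
```

Applied to the greedy result for 4..8, `refine([8, 5, 4], [7, 6])` recovers the perfect split.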
I tried these heuristics on 2000 sets of numbers drawn randomly from 1..99, 2000 sets drawn from 1..9999 and 100 from 9000..9999; for sets of length 31, 100 and 500:
     :        :  2000 x 1..99    :  2000 x 1..9999  : 100 x 9000..9999  :
     :        : perf.% av. delta : perf.% av. delta : perf.%  av. delta :
 ----:--------:------------------:------------------:-------------------:
  31 : hack   :  44.4%   +1.117  :   3.0% +131.786  :   0.0% +4500.720  :
     : refine :  97.7%   +0.024  :   3.5%  +25.639  :   0.0% +1435.640  :
     : best   : 100.0%   +0.000  : 100.0%   +0.000  :   0.0% +1015.820  :
 ----:--------:------------------:------------------:-------------------:
 100 : hack   :  84.5%   +0.174  :   3.0%  +38.562  :  13.0%    +4.640  :
     : refine : 100.0%   +0.000  :  30.8%   +2.292  :  95.0%    +0.050  :
     : best   : 100.0%   +0.000  : 100.0%   +0.000  : 100.0%    +0.000  :
 ----:--------:------------------:------------------:-------------------:
 500 : hack   : 100.0%   +0.000  :   9.7%   +7.560  :  50.0%    +0.750  :
     : refine : 100.0%   +0.000  :  99.9%   +0.001  : 100.0%    +0.000  :
     : best   : 100.0%   +0.000  : 100.0%   +0.000  : 100.0%    +0.000  :
where the perf% is the percentage of perfectly partitioned sets after the given operation, and the av. delta is the average distance from perfect, over all sets. The 'best' line shows the best possible outcome.
Note that the longer the set:
- the more likely there is to be a perfect partition,
- and the more likely the 'hack' and 'refine' operations are to find one!!
Having got that far, how does one search for the best possible partition? Looking at the problem as a tree walk, what I tried was:
- start at the node identified by the initial 'hack' & 'refine', and work back up the tree (making sure that all nodes are considered exactly once).
- discard parts of the tree whose minimum result is greater than the best so far, or whose maximum result is less than the best so far - minimax.
- discard parts of the tree which have already been visited with the same partition total - similar to the trick used in 708384.
- build a table for all combinations of the last 'n' numbers in the set, which short-circuits the search of the lowest levels of the tree. Building the table is not free, so it's built progressively, when there appears to be a need.
- limit the search, so that it doesn't disappear for ever!
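A stripped-down sketch of the pruning idea (mine, far simpler than the full scheme described above, and positive values only): depth-first over "add the next number to the left or right side", cutting any branch whose remaining numbers cannot beat the best difference found so far.

```python
def bnb_partition(numbers):
    """Return the smallest achievable |sum(p1) - sum(p2)| (positive inputs)."""
    nums = sorted(numbers, reverse=True)
    # suffix[i] = sum of nums[i:], an upper bound on how much the
    # remaining numbers can still shrink the current difference.
    suffix = [0] * (len(nums) + 1)
    for i in range(len(nums) - 1, -1, -1):
        suffix[i] = suffix[i + 1] + nums[i]
    best = [abs(sum(nums))]        # worst case: everything on one side

    def walk(i, diff):
        if i == len(nums):
            best[0] = min(best[0], abs(diff))
            return
        # Prune: even moving all remaining numbers to the lighter side
        # cannot get the difference below |diff| - suffix[i].
        if abs(diff) - suffix[i] >= best[0]:
            return
        walk(i + 1, diff + nums[i])
        walk(i + 1, diff - nums[i])

    walk(0, 0)
    return best[0]
```

The minimax and same-total pruning described above cut far more of the tree than this bound alone, but the skeleton is the same: a signed running difference and a reason to stop descending.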
This turned out to be quite tricky for a bear-of-little-brain, and, with all the debug and test stuff, runs to 2800 lines.
Anyway, I ran this on the same test data, and collected the average running time per partition operation:
     :   2000 x 1..99     :   2000 x 1..9999   :  100 x 9000..9999  :
 ----:--------------------:--------------------:---------------------:
  31 : GMCH  127.20 uSec  : GMCH    4.26 mSec  : GMCH  615.67 mSec  :
     : tilly  32.18 mSec  : tilly   2.55 Secs  : tilly   6.34 Secs  :
 ----:--------------------:--------------------:---------------------:
 100 : GMCH  298.18 uSec  : GMCH    9.59 mSec  : GMCH  733.82 uSec  :
     : tilly 606.84 mSec  : tilly  65.82 Secs  : tilly 109.46 mSec  :
 ----:--------------------:--------------------:---------------------:
 500 : GMCH    1.76 mSec  : GMCH    1.87 mSec  : GMCH    1.84 mSec  :
     : tilly  19.89 Secs  : tilly    *****     : tilly    *****     :
where 'GMCH' is my code, 'tilly' is the code given in 708384, uSec is microseconds and mSec is milliseconds. The '*****' is where I terminated the test, because it had run out of memory and started to thrash.
The code is available here: http://www.highwayman.com/perls/part_GMCH_v1.14.pm. I can post it here if y'all like.
What has surprised me is that this works as well as it does, particularly as the set gets bigger! I wonder if I just haven't found the right test data :( I'd be pleased to hear of stuff that this won't cope with.
For completeness, I'm only handling +ve values. To cope with -ve values the minimax and other pruning has to be modified to do different things depending on which side of the perfect partition one is on at any time.

That's a nice approach.
If you want to extend it to negative numbers, it will be less work than you think if you use the right hack. As far as the difference is concerned, having, say, -16 in one partition is exactly the same as having 16 in the other one. So flip all of your signs to positive, then when you've found the solution, delete the appropriate positive ones from one partition while inserting negatives in the other. Voila! Rather than a pervasive logic change you just have to preprocess the list and postprocess your answer.
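That postprocessing step can be sketched like so (Python for illustration; the function name and the use of a Counter are mine). Moving a value v from one side and adding -v to the other subtracts v from both sums, so the difference is unchanged.

```python
from collections import Counter

def fix_signs(numbers, p1, p2):
    """Undo the all-positive preprocessing: for each negative input,
    delete its absolute value from whichever side holds it and put
    the negative number itself on the other side."""
    need = Counter(-x for x in numbers if x < 0)   # abs values to relocate
    out1, out2 = [], []
    for src, same, other in ((p1, out1, out2), (p2, out2, out1)):
        for v in src:
            if need[v] > 0:
                need[v] -= 1
                other.append(-v)    # -v is the original negative number
            else:
                same.append(v)
    return out1, out2
```

For example, solving on abs([-8, 7, 6, 5, -4]) = [8, 7, 6, 5, 4] gives the perfect split [8, 7] vs [6, 5, 4]; `fix_signs` then turns that into a perfect split of the original signed numbers.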

It's late for this now, but thanks a lot for the effort.