Recently Pepe asked an interesting question at Divide array of integers into most similar value halves. Given an array of integers, he wanted to split it into two subarrays whose sums were as close together as possible. Many people informed him that this is a very hard problem, and references were provided showing that it is the Partition Problem, which is known to be NP-complete. Luckily Pepe did not need an exact answer, and there is a greedy algorithm that was good enough for his needs.
What makes this problem interesting is that, the many confident answers and external references notwithstanding, the actual problem the OP has is almost certainly not NP-complete! As proof I have an implementation on my computer that reliably partitions an array of 100 random integers in the range 0-999 in under 7 seconds. That is, the following code reliably runs in under 7 seconds and is guaranteed to find the best answer:
my ($x, $y)
    = find_best_partition(map int(rand(1000)), 1..100);
How is this possible?
The trick is that when it comes to NP-complete problems, the devil is in the details. If you change even one detail of the problem, what was NP-complete can suddenly become much more tractable. Sometimes the detail is so easy to miss as to be virtually invisible. Such is the case here.
The relevant detail in this case is that we are dealing with an array of small integers. If we have 100 integers in the range 0-1000 inside and look at the partitions, the difference of the sums of the two partitions has to be in the range -100,000 to 100,000. Of course there are 2**100 possible partitions, but the same difference in sums will show up from many, many possible partitions. And we don't care about enumerating the possible partitions, only the possible differences. While a range of 200,000 possible differences is not exactly small, it is still tractable.
With arbitrary integers this idea would fail hard, because nothing stops you from having integers of size 2**100. But in the real world, when we say "integer" we usually don't actually mean "arbitrary integer". We mean "small integer", and that difference can be important.
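To see why bounded values tame the blowup, here is a small sketch (my illustration, not code from the thread) that counts how many distinct partition differences actually arise for 100 random integers under 1000. The count is capped by twice the total sum plus one, no matter that there are 2**100 partitions:

```perl
use strict;
use warnings;

# Sketch: enumerate the distinct differences reachable by partitioning.
# The size of this set is what bounds the dynamic programming table.
my @nums  = map { int(rand(1000)) } 1 .. 100;
my $total = 0;
$total += $_ for @nums;

my %diffs = (0 => 1);    # difference => seen
for my $n (@nums) {
    my %next;
    for my $d (keys %diffs) {
        $next{$d + $n} = 1;    # $n goes in the first partition
        $next{$d - $n} = 1;    # $n goes in the second partition
    }
    %diffs = %next;
}

# At most 2 * $total + 1 distinct differences can ever exist.
printf "%d distinct differences, bound %d\n",
    scalar(keys %diffs), 2 * $total + 1;
```

Each pass touches at most one hash entry per reachable difference, which is where the rough O(n**2)-flavoured estimate for small inputs comes from.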
In this case it looks like we want to build a hash of possible differences between the sums of the partitions. The point of the hash is to find and eliminate unneeded duplicate ways of getting the same difference, keeping the problem down to a reasonable size. A back-of-the-envelope estimate says that if we do it right, performance should be O(n**2) in the size of the initial dataset. (A darned sight better than the naive 2**O(n)!) After handling a few technical details, that leads to the following naive implementation that reliably runs on my laptop in about 20 seconds:
sub find_best_partition {
    # We're going to try to find partitions that add up to each possible
    # number that can be added up to.
    my $old;
    my $new = {0 => [[], []]};
    for my $n (sort {$a <=> $b} @_) {
        $old = $new;
        $new = {};
        while (my ($key, $value) = each %$old) {
            my ($p1, $p2) = @$value;
            $new->{$key + $n} = [[$n, $p1], $p2];
            $new->{$key - $n} = [$p1, [$n, $p2]];
        }
    }
    my $best = each %$new;
    while (my $difference = each %$new) {
        if (abs($difference) < abs($best)) {
            $best = $difference;
        }
    }
    # We need to flatten our nested arrays.
    my ($p1, $p2) = @{ $new->{$best} };
    my @part_1;
    while (@$p1) {
        push @part_1, $p1->[0];
        $p1 = $p1->[1];
    }
    my @part_2;
    while (@$p2) {
        push @part_2, $p2->[0];
        $p2 = $p2->[1];
    }
    return (\@part_1, \@part_2);
}
Of course 20 seconds is a little slower than I'd like. So I reasoned that most of my time was spent looking at partial partitions that I should know were not going to lead to the best answer. So I figured that I should first find a greedy solution, and then skip any partial partition which could not possibly beat the greedy one when it is filled out. This filtering would be most effective if I put the biggest numbers first, because the sum of a few big numbers at the start can be as big as the sum of many little numbers at the end.
When I implemented this I also noticed that you often find a perfect partition. So I added a check for whether the partial partition we have, plus the rest of the initial greedy partition, forms a perfect partition. If it does, then stop immediately. In that case you get the right answer virtually instantaneously.
This version often finishes in a couple of hundredths of a second, and the rest of the time finishes on my laptop in under 7 seconds. The code is kind of long, so I'll hide it.
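The greedy starting point described above can be sketched like this (my reconstruction of the idea in the text, not tilly's actual hidden code): sort descending, then drop each number onto whichever side currently has the smaller sum.

```perl
use strict;
use warnings;

# A hedged sketch of the greedy heuristic, not the hidden implementation:
# biggest numbers first, each one assigned to the currently lighter side.
sub greedy_partition {
    my @sorted = sort { $b <=> $a } @_;
    my (@p1, @p2);
    my ($s1, $s2) = (0, 0);
    for my $n (@sorted) {
        if ($s1 <= $s2) { push @p1, $n; $s1 += $n }
        else            { push @p2, $n; $s2 += $n }
    }
    return (\@p1, \@p2, abs($s1 - $s2));
}

my ($p1, $p2, $diff)
    = greedy_partition(400, 402, 521, 735, 758, 191, 307, 679, 776, 877);
print "greedy difference: $diff\n";
```

The exact search then only has to beat this difference, and any partial partition whose best possible completion is worse can be pruned away.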
So the next time you think you have an NP-complete problem, before throwing your hands up in despair, think about whether there is anything, anything at all, that you can use to turn it into a much simpler problem. Usually you will fail, but every so often you'll get lucky. As in this case.
For another example of a problem that looked a lot like an NP-complete one, see Puzzle: need a more general algorithm. See Re: Balance columns for the surprisingly quick solution.
Re: NP-complete sometimes isn't (A benchmark)
by BrowserUk (Pope) on Sep 02, 2008 at 09:17 UTC

Here are some typical results from benchmarking your routine along with some of the routines in the other thread. (I omitted ikegami's as he wasn't happy with it):
c:\test>708290b LOG=4 MAX=1e3
Testing buk with 10 random values in the range 0 .. 1e3  Difference:= 1; took 0.000244 seconds
Testing funky with 10 random values in the range 0 .. 1e3  Difference:= 65; took 0.000118 seconds
Testing tilly with 10 random values in the range 0 .. 1e3  Difference:= 1; took 0.004491 seconds
Testing tye with 10 random values in the range 0 .. 1e3  Difference:= 1; took 0.001024 seconds
Testing buk with 100 random values in the range 0 .. 1e3  Difference:= 1; took 0.005047 seconds
Testing funky with 100 random values in the range 0 .. 1e3  Difference:= 1; took 0.000587 seconds
Testing tilly with 100 random values in the range 0 .. 1e3  Difference:= 1; took 15.174596 seconds
Testing tye with 100 random values in the range 0 .. 1e3  ******* timed out after 60 seconds
Testing buk with 1000 random values in the range 0 .. 1e3  Difference:= 1; took 0.015625 seconds
Testing funky with 1000 random values in the range 0 .. 1e3  Difference:= 1; took 0.007535 seconds
Testing tilly with 1000 random values in the range 0 .. 1e3  ******* timed out after 60 seconds
Testing tye with 1000 random values in the range 0 .. 1e3  ******* timed out after 60 seconds
Testing buk with 10000 random values in the range 0 .. 1e3  Difference:= 1; took 0.075423 seconds
Testing funky with 10000 random values in the range 0 .. 1e3  Difference:= 1; took 0.190954 seconds
Testing tilly with 10000 random values in the range 0 .. 1e3  ******* timed out after 60 seconds
Testing tye with 10000 random values in the range 0 .. 1e3  ******* timed out after 60 seconds

c:\test>708290b LOG=4 MAX=1e4
Testing buk with 10 random values in the range 0 .. 1e4  Difference:= 1; took 0.000326 seconds
Testing funky with 10 random values in the range 0 .. 1e4  Difference:= 1; took 0.000102 seconds
Testing tilly with 10 random values in the range 0 .. 1e4  Difference:= 1; took 0.003578 seconds
Testing tye with 10 random values in the range 0 .. 1e4  Difference:= 1; took 0.001063 seconds
Testing buk with 100 random values in the range 0 .. 1e4  Difference:= 1; took 0.044324 seconds
Testing funky with 100 random values in the range 0 .. 1e4  Difference:= 7; took 0.000601 seconds
Testing tilly with 100 random values in the range 0 .. 1e4  Difference:= 1; took 15.901730 seconds
Testing tye with 100 random values in the range 0 .. 1e4  ******* timed out after 60 seconds
Testing buk with 1000 random values in the range 0 .. 1e4  Difference:= 0; took 0.515625 seconds
Testing funky with 1000 random values in the range 0 .. 1e4  Difference:= 0; took 0.008393 seconds
Testing tilly with 1000 random values in the range 0 .. 1e4  Difference:= 0; took 0.013012 seconds
Testing tye with 1000 random values in the range 0 .. 1e4  Difference:= 92467; took 0.016869 seconds
Testing buk with 10000 random values in the range 0 .. 1e4  Difference:= 0; took 4.654061 seconds
Testing funky with 10000 random values in the range 0 .. 1e4  Difference:= 0; took 0.136925 seconds
Testing tilly with 10000 random values in the range 0 .. 1e4  Difference:= 0; took 0.148673 seconds
Testing tye with 10000 random values in the range 0 .. 1e4  Difference:= 2252553; took 1.632284 seconds

c:\test>708290b LOG=4 MAX=1e4
Testing buk with 10 random values in the range 0 .. 1e4  Difference:= 2; took 0.003548 seconds
Testing funky with 10 random values in the range 0 .. 1e4  Difference:= 198; took 0.000107 seconds
Testing tilly with 10 random values in the range 0 .. 1e4  Difference:= 2; took 0.003799 seconds
Testing tye with 10 random values in the range 0 .. 1e4  Difference:= 2; took 0.001117 seconds
Testing buk with 100 random values in the range 0 .. 1e4  Difference:= 0; took 0.213406 seconds
Testing funky with 100 random values in the range 0 .. 1e4  Difference:= 2; took 0.000587 seconds
Testing tilly with 100 random values in the range 0 .. 1e4  Difference:= 0; took 0.003341 seconds
Testing tye with 100 random values in the range 0 .. 1e4  Difference:= 1282; took 0.001121 seconds
Testing buk with 1000 random values in the range 0 .. 1e4  Difference:= 1; took 0.007796 seconds
Testing funky with 1000 random values in the range 0 .. 1e4  Difference:= 1; took 0.007606 seconds
Testing tilly with 1000 random values in the range 0 .. 1e4  ******* timed out after 60 seconds
Testing tye with 1000 random values in the range 0 .. 1e4  ******* timed out after 60 seconds
Testing buk with 10000 random values in the range 0 .. 1e4  Difference:= 1; took 4.281250 seconds
Testing funky with 10000 random values in the range 0 .. 1e4  Difference:= 1; took 0.150492 seconds
Testing tilly with 10000 random values in the range 0 .. 1e4  ******* timed out after 60 seconds
Testing tye with 10000 random values in the range 0 .. 1e4  ******* timed out after 60 seconds
The full benchmark code is here:
Examine what is said, not who speaks  Silence betokens consent  Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.

perl funky.pl 400 402 521 735 758 191 307 679 776 877
Target is 2823
First container: sum(877 776 758 402) = 2813
Second container: sum(735 679 521 400 307 191) = 2833
but a better partition is
First container: sum(400 402 521 735 758) = 2816
Second container: sum(191 307 679 776 877) = 2830
update: ah, of course. That is what your "Difference" output field reveals....

Indeed, but I don't think that he ever made that claim? (By contrast, tilly (TTBOMK correctly) did.)
And neither will mine, always. Much of the time it does. But, even with the same inputs, due to the semi-random nature of the algorithm, it will occasionally give up trying before it finds the optimum solution. I've never seen it be very far from optimum, but it doesn't always find it within the specified iterations bound.
But, given that it will most times partition a 1 million value dataset within 8 to 10 seconds:
Testing buk with 1000000 random values in the range 0 .. 1e4  Difference:= 0; took 8.156250 seconds
Where a brute force solution for 100 values would theoretically take around 17 years*, the occasional imperfect solution is a fair trade, I reckon.
(*)I tried to use the same math to calculate the theoretical time for 1 million value dataset, but nothing I have access to can calculate 2**1e6 in a timeframe that I was prepared to wait for :)
    2**100    ~ 10e30
    2**1000   ~ 10e300
    2**10000  ~ 10e3000
    2**100000 ~ 10e30000
    So, by projection: 2**1000000 ~ 10e300000
Which, if my sleep-deprived brain isn't dissin' me, represents something like 100,000 years of processing.
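Rather than asking anything to compute 2**1e6 outright, the order of magnitude can be had with an ordinary logarithm. A quick sketch (mine, not from the thread):

```perl
use strict;
use warnings;

# The number of decimal digits of 2**$n is int($n * log10(2)) + 1,
# so we never need to materialize the huge number itself.
my $n      = 1_000_000;
my $digits = int($n * log(2) / log(10)) + 1;
print "2**$n has $digits decimal digits\n";
```

That puts 2**1e6 around 10**301030, a little larger than the projection above, but the conclusion is the same: utterly intractable by brute force.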
Will I trade that for a 10 second response that's occasionally non-optimal? Sure :)


My internet was down for a while, so I amused myself by optimizing my code a little. To be specific, I now only look at pairs of partitions with a positive difference, I moved the "break out early" checks out of the inner loop, and I switched from using hashes to arrays.

Can I offer a suggestion? Okay, I'm going to offer a suggestion whether or not you will have the time or inclination to do anything with it... :)
I think it would greatly simplify and optimise your algorithm to avoid the conditionals associated with negative numbers. You can do that by applying a simple sort to the inputs and then adjusting the whole array to make it all positive (and undo it on output):
sub partition {
    my @numbers = sort { $b <=> $a } @_;
    my $adjust = 0;
    if( $numbers[ -1 ] < 0 ) {
        $adjust = - $numbers[ -1 ];
        $_ += $adjust for @numbers;
    }
    ...
    $_ -= $adjust for @part_1, @part_2;
    return \@part_1, \@part_2;
}
Note: I attempted to make this change and offer it back to you, but there is something about your algorithm that I am not understanding, because my attempts disappear up their own jacksies (sp? :).


Sadly, I got interested in this problem. Although it is nominally O(2**(N-1)), it does seem possible to do quite well for a lot less work - and, the larger the set, the easier it appears to get!!
So the trick with any decision problem is to prune the search space. tilly observed that although the search space is 2**(N-1), the number of unique nodes may be a lot smaller.
The other trick in the bag is to see if some heuristic can help us prune. As others have observed, the set can be partitioned quite effectively by:
  - sorting the set into descending order
  - assigning values in turn to the partition with the smaller sum at the time
This 'hack' works wonderfully for sets containing numbers drawn from a range 1..n, and pretty well for m..n - and, the bigger the set, the better it works!
(I tested this against putting values into one partition until it just exceeds the perfect split - starting with the set sorted in descending order, and with random selection from the set. I even tried putting values into one partition until it just doesn't exceed the perfect split, and then finding the best remaining value. No better.)
Looking at the result of the initial 'hack', one can see numbers that could be swapped between partitions to improve things. This 'refinement' is a linear scan down the two partitions - a bit worse than linear if numbers can be exchanged.
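One refinement step can be sketched like so (my own reading of the description, not GMCH's actual code): swapping an element from the heavier side with one from the lighter side changes the difference by twice the gap between them, so any pair whose gap lies strictly between 0 and the current difference is an improvement.

```perl
use strict;
use warnings;

# Hedged sketch of one 'refinement' step: find a single cross-partition
# swap that shrinks the difference, or report that none exists.
sub refine_once {
    my ($p1, $p2) = @_;                      # array refs, modified in place
    my $s1 = 0; $s1 += $_ for @$p1;
    my $s2 = 0; $s2 += $_ for @$p2;
    ($p1, $p2, $s1, $s2) = ($p2, $p1, $s2, $s1) if $s1 < $s2;
    my $diff = $s1 - $s2;                    # $p1 is now the heavier side
    for my $i (0 .. $#$p1) {
        for my $j (0 .. $#$p2) {
            my $delta = $p1->[$i] - $p2->[$j];
            # swapping changes the difference to $diff - 2 * $delta,
            # which is an improvement exactly when 0 < $delta < $diff
            if ($delta > 0 && $delta < $diff) {
                ($p1->[$i], $p2->[$j]) = ($p2->[$j], $p1->[$i]);
                return 1;
            }
        }
    }
    return 0;    # no single swap improves things
}
```

Repeating refine_once until it returns 0 gives the "a bit worse than linear" scan described above.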
I tried these heuristics on 2000 sets of numbers drawn randomly from 1..99, 2000 sets drawn from 1..9999 and 100 from 9000..9999; for sets of length 31, 100 and 500:
:     :        :  2000 x 1..99     :  2000 x 1..9999   : 100 x 9000..9999  :
:     :        : perf.%  av. delta : perf.%  av. delta : perf.%  av. delta :
: 31  : hack   :  44.4%  +1.117    :   3.0%  +131.786  :   0.0%  +4500.720 :
:     : refine :  97.7%  +0.024    :   3.5%   +25.639  :   0.0%  +1435.640 :
:     : best   : 100.0%  +0.000    : 100.0%    +0.000  :   0.0%  +1015.820 :
: 100 : hack   :  84.5%  +0.174    :   3.0%   +38.562  :  13.0%     +4.640 :
:     : refine : 100.0%  +0.000    :  30.8%    +2.292  :  95.0%     +0.050 :
:     : best   : 100.0%  +0.000    : 100.0%    +0.000  : 100.0%     +0.000 :
: 500 : hack   : 100.0%  +0.000    :   9.7%    +7.560  :  50.0%     +0.750 :
:     : refine : 100.0%  +0.000    :  99.9%    +0.001  : 100.0%     +0.000 :
:     : best   : 100.0%  +0.000    : 100.0%    +0.000  : 100.0%     +0.000 :
where the perf% is the percentage of perfectly partitioned sets after the given operation, and the av. delta is the average distance from perfect, over all sets. The 'best' line shows the best possible outcome.
Note that the longer the set:
  - the more likely there is to be a perfect partition,
  - and the more likely the 'hack' and 'refine' operations are to find one!!
Having got that far, how does one search for the best possible partition? Looking at the problem as a tree walk, what I tried was:
  - start at the node identified by the initial 'hack' & 'refine', and work back up the tree (making sure that all nodes are considered exactly once).
  - discard parts of the tree whose minimum result is greater than the best so far, or whose maximum result is less than the best so far - minimax.
  - discard parts of the tree which have already been visited with the same partition total - similar to the trick used in 708384.
  - build a table for all combinations of the last 'n' numbers in the set, which short-circuits the search of the lowest levels of the tree. Building the table is not free, so it's built progressively, when there appears to be a need.
  - limit the search, so that it doesn't disappear forever!
This turned out to be quite tricky for a bear-of-little-brain, and, with all the debug and test stuff, runs to 2800 lines.
Anyway, I ran this on the same test data, and collected the average running time per partition operation:
:     :  2000 x 1..99      :  2000 x 1..9999    : 100 x 9000..9999   :
: 31  : GMCH  127.20 uSec  : GMCH    4.26 mSec  : GMCH  615.67 mSec  :
:     : tilly  32.18 mSec  : tilly   2.55 Secs  : tilly   6.34 Secs  :
: 100 : GMCH  298.18 uSec  : GMCH    9.59 mSec  : GMCH  733.82 uSec  :
:     : tilly 606.84 mSec  : tilly  65.82 Secs  : tilly 109.46 mSec  :
: 500 : GMCH    1.76 mSec  : GMCH    1.87 mSec  : GMCH    1.84 mSec  :
:     : tilly  19.89 Secs  : tilly    *****     : tilly    *****     :
where 'GMCH' is my code, 'tilly' is the code given in 708384, uSec is microseconds and mSec is milliseconds. The '*****' is where I terminated the test, because it had run out of memory and started to thrash.
The code is available here: http://www.highwayman.com/perls/part_GMCH_v1.14.pm. I can post it here if y'all like.
What has surprised me is that this works as well as it does, particularly as the set gets bigger! I wonder if I just haven't found the right test data :( I'd be pleased to hear of stuff that this won't cope with.
For completeness: I'm only handling +ve values. To cope with -ve values, the minimax and other pruning has to be modified to do different things depending on which side of the perfect partition one is on at any time.


Re: NP-complete sometimes isn't
by Limbic~Region (Chancellor) on Sep 02, 2008 at 12:28 UTC


I had read it, but it had slipped my mind. And yes, that is a very good example of a problem being feasible despite looking like it was NP-complete. And I like Re: How many words does it take?'s comment that when we think we're faced with an NP-complete problem, we are probably trying to solve the wrong problem.
Re: NP-complete sometimes isn't
by BrowserUk (Pope) on Sep 02, 2008 at 07:38 UTC

and is guaranteed to find the best answer
I think you may have a bug. Given the input set [400 402 521 735 758 191 191 307 679 776 877], your code produces this partition:
[400 402 521 735 758 191] := 3007
[191 307 679 776 877] := 2830
But there is a better partition:
[679 776 307 877 191] := 2830
[400 402 521 758 735] := 2816




No. I produce the "input list" shown in my post, by combining the partitions output from tilly's code. (Because it wasn't actually displayed anywhere.)
But, as tilly explained, under certain circumstances the code he originally posted contained a bug that meant it would duplicate one value. He quickly corrected that problem, but not before I downloaded and ran his code.

Your second output has one value less than the first one.
Re: NP-complete sometimes isn't
by moritz (Cardinal) on Sep 02, 2008 at 07:48 UTC

The relevant detail in this case is that we are dealing with an array of small integers.
Are we? Do you know that? Just because the examples in the OP were all small integers, does that mean we can assume that it's always the case?
Your solution is very nice, and if it is applicable it's certainly one of the best solutions, and I like it. Still, you should verify your assumptions before relying on them (or was it actually mentioned in the original thread, and I missed it?).

Well, to some extent there is a guarantee of that. If they're small enough to be represented as native integers in raw Perl, then they're limited in size, and my solution is faster than the NP-complete one beyond some (generally not very large) number of numbers.
Of course that guarantee is not particularly useful, because the program won't necessarily run on a real machine without running out of RAM.
At which point I point out my weasel words, "almost certainly not NP-complete", and say that my "almost certainly" was based on my strong suspicion that most concrete programming problems which would result in this kind of question coming up involve small integers. With the critical question being how small.
Re: NP-complete sometimes isn't
by Argel (Prior) on Sep 03, 2008 at 00:13 UTC

Please use readmore tags! The spoiler tag has four different display options, including a table view, so using the spoiler tag to save space does not always work.
Elda Taluta; Sarks Sark; Ark Arks
