PerlMonks 
It is, at best, very rude to say “total garbage” in response to a post ... it doesn’t make you look like a genius, merely a boor. (I searched the entire thread for the word “best” and did not find it. Perhaps your eyes are better than mine.)

That being said, if the requirement is for “a matching entry,” then the technique that I described will work extremely well. Strictly speaking, yes, it is possible for (say) a hash collision to occur, but with a strong algorithm like SHA-1 it basically isn’t going to happen.

Any exploitable “predictable fact” about the actual data can be profitably used to reduce the search space, and (IMHO) in a situation such as this, pragmatically must be. Even something as basic as “the total number of 1’s” can be pressed into service. If you can “reasonably predict” that the differences between the searched-for array and the “best fit” that will be found will consist, let’s say, of a change of state in no more than (some) n positions, then even a brute-force search could be limited to consider only the candidates which fall into that range, perhaps starting with any exact matches and then working outward ± x for x in (1..n). This does, of course, open up the possibility of a statistical Type II error (concluding that no best match exists when in fact one does), but this might be judged to be either acceptable or necessary. (Or not ...)

If necessity really must become the mother of invention, and once again if you know that there are exploitable characteristics of the data, it is also possible to apply hashing or ones-counting to slices of the total vector. Instead of merely counting all the 1’s, count them in every (say) thousand bits. Apply some useful heuristic to this vector of sums to decide whether you choose to examine the whole thing.

In the end, the problem won’t be completely abstract, nor will its solution be.
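The ideas above — a SHA-1 index for exact matches, per-slice ones-counting as a cheap prefilter, and a brute-force pass limited to a small Hamming radius — can be sketched roughly as follows. This is Python for brevity (the thread never settles on concrete code); all function names and parameters here are illustrative, and the slice width is shrunk from “every thousand bits” down to four so the toy data stays readable:

```python
import hashlib

def digest(bits):
    """SHA-1 fingerprint of a bit-vector, for exact-match lookup."""
    return hashlib.sha1(bytes(bits)).hexdigest()

def slice_counts(bits, width=4):
    """Count the 1's in each fixed-width slice of the vector
    (the 'count them in every thousand bits' idea, scaled down)."""
    return [sum(bits[i:i + width]) for i in range(0, len(bits), width)]

def candidates(target, vectors, max_diff):
    """Prefilter: keep only vectors whose slice counts could be within
    max_diff bit flips of the target.  Each flipped bit changes exactly
    one slice count by one, so the L1 distance between count vectors is
    a lower bound on the Hamming distance."""
    want = slice_counts(target)
    return [v for v in vectors
            if sum(abs(a - b) for a, b in zip(want, slice_counts(v)))
               <= max_diff]

def best_match(target, vectors, max_diff):
    """Exact match via hashing first, then a brute-force search
    limited to candidates within max_diff differing positions."""
    index = {digest(v): v for v in vectors}
    hit = index.get(digest(target))
    if hit is not None:
        return hit, 0
    best, best_d = None, max_diff + 1
    for v in candidates(target, vectors, max_diff):
        d = sum(a != b for a, b in zip(target, v))   # Hamming distance
        if d < best_d:
            best, best_d = v, d
    return (best, best_d) if best is not None else (None, None)
```

Note the trade-off the post warns about: set `max_diff` too low and the prefilter will discard the true best match — returning `(None, None)` when a plausible match exists is exactly the Type II error described above.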
There must be something, in the real world, that stipulates what is and what is not “best,” or even “plausible.” It’s my opinion that you must solve this problem, at least in significant part, by reducing the total number of vectors that you consider, and by selecting for consideration only those which are “most likely.” Representational optimizations such as the use of bit-vectors may also be important, but even these can’t be applied in a brute-force way: you’ll simply never get the work done.

In reply to Re: Comparing two arrays
by sundialsvc4

