
Re^4: Memory issue with large array comparison

by aaron_baugher (Curate)
on May 25, 2012 at 03:56 UTC (#972359)

in reply to Re^3: Memory issue with large array comparison
in thread Memory issue with large array comparison

This is what the original poster did, but with different variable names and a slightly different regex, and it ran out of memory. But isn't this O(N²)? It seems to me that it greps every item in one array against all the items in the other array, so it's really no different from this:

my @array3;
OUTER: for my $a1 (@array1){
    for my $a2 (@array2){
        next OUTER if they_match_somehow();
    }
    push @array3, $a1;   # it didn't match anything in @array2
}

Both cases have two nested loops; it's just harder to see them in the grep-within-a-grep method.
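The grep-within-a-grep form under discussion presumably looked something like the sketch below (variable names and sample data are mine, not the original poster's code); the outer grep walks @array1 while the inner grep rescans all of @array2 for every element, which is where the two hidden loops live:

```perl
use strict;
use warnings;

my @array1 = qw(alpha beta gamma);
my @array2 = qw(beta delta);

# For each element of @array1, the inner grep scans all of @array2,
# so this is the same two nested loops as the explicit for-for version.
my @array3 = grep {
    my $a1 = $_;
    !grep { $a1 eq $_ } @array2;
} @array1;

print "@array3\n";   # alpha gamma
```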

Aaron B.
Available for small or large Perl jobs; see my home node.

Replies are listed 'Best First'.
Re^5: Memory issue with large array comparison
by ww (Archbishop) on May 26, 2012 at 00:47 UTC
    ++ aaron_baugher; I didn't even notice the similarity ...and shame on me for that, as it means it's no answer to OP's original dilemma.

    I was, I realize now (thanks to your watchfulness), obsessing over the multiple responses offering a hash as the solution. I still think those represent something close to cargo-culting a meme (rather than actual code) -- and not an optimal solution, since, if I read the wisdom of the sages correctly (and if they're right, of course), using a hash would be at least as memory intensive, and probably more so.

    That's also an issue with map and grep (cf Eliya's observations, above), but perhaps less so than using a hash (that's another test that I haven't undertaken, but which might lead to a publishable finding). And in the same node, Eliya makes a cogent point (echoed in slightly different context by dave_the_m's code): there are a variety of ways to attack OP's problem with reduced memory demand. Yet another might be a step-wise solution: first, separate the id portion of the first dataset to a file of its own; then identify the ids in the second file that don't have identical (or identically normalized, if that's involved, too) values.
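The step-wise idea might look roughly like this sketch. The record format (id as the first whitespace-separated field) and the sample data are invented for illustration; in-memory filehandles stand in for the two real files, and only the ids from the first dataset are kept in memory:

```perl
use strict;
use warnings;

# In-memory "files" stand in for the two datasets for this sketch;
# real code would open actual files on disk.
my $data1 = "a 1\nb 2\nd 4\n";
my $data2 = "b 9\nc 3\nd 5\n";

# Pass 1: keep only the id field from the first dataset in a hash,
# rather than holding whole records in memory.
my %seen;
open my $fh1, '<', \$data1 or die $!;
while (my $line = <$fh1>) {
    my ($id) = split ' ', $line;   # assumed: id is the first field
    $seen{$id} = 1;
}
close $fh1;

# Pass 2: stream the second dataset line by line, reporting ids
# that have no match in the first.
my @unmatched;
open my $fh2, '<', \$data2 or die $!;
while (my $line = <$fh2>) {
    my ($id) = split ' ', $line;
    push @unmatched, $id unless $seen{$id};
}
close $fh2;

print "@unmatched\n";   # c
```

Because the second file is never loaded whole, peak memory is bounded by the size of the id set from the first file.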

    But, again, ++ for casting a sharp eye on the prior responses.

      Thanks for the compliment, ww. You have a point, that sometimes when everyone comes out with the same suggestion, it reflects cultish thinking. But sometimes it means there really is one best way to do it. When the problem is "find strings from one list in another list," it's just pretty hard to beat a hash lookup for speed and simplicity, and this was a pretty typical case. A hash lookup is so superior to other methods that it makes sense to go to it automatically -- without thinking, even -- unless there's some reason it won't work. It's like using strict: you should always use it unless you know enough to know when not to use it.
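      A minimal sketch of that hash-lookup approach (variable names and sample data are mine, not from the thread): build the lookup hash from one array once, then make a single pass over the other.

```perl
use strict;
use warnings;

my @array1 = qw(alpha beta gamma);
my @array2 = qw(beta delta);

# Build a lookup hash from @array2 once: O(M) to build, and each
# membership test afterwards is O(1) on average.
my %in_array2 = map { $_ => 1 } @array2;

# One pass over @array1: O(N) lookups instead of O(N*M) scans.
my @array3 = grep { !$in_array2{$_} } @array1;

print "@array3\n";   # alpha gamma
```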

      On the memory issue, I'm really not sure why the grep-in-a-grep solution ran the OP out of memory. Maybe it causes temporary lists to be built in memory? In any case, a hash isn't all that memory intensive. I created a 10,000 item array, and then turned it into a hash's keys. The hash took 150% as much memory as the array. So edge cases where you have enough memory to use an array but not a hash will be unusual. I agree that solving the problem in less memory could be an interesting challenge, but only worth tackling if a hash lookup fails first.
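      One way to repeat that measurement is with the CPAN module Devel::Size (not part of the thread, and not a core module, so this sketch assumes it is installed):

```perl
use strict;
use warnings;
use Devel::Size qw(total_size);   # CPAN module; assumed installed

# Roughly the experiment described above: a 10,000-item array,
# then the same items as the keys of a hash.
my @array = map { "id$_" } 1 .. 10_000;
my %hash  = map { $_ => 1 } @array;

printf "array: %d bytes\n", total_size(\@array);
printf "hash:  %d bytes\n", total_size(\%hash);
printf "ratio: %.2f\n",     total_size(\%hash) / total_size(\@array);
```

Exact numbers vary by perl version and build, but the hash should come out larger than the array by a modest constant factor, not by an order of magnitude.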

      Aaron B.
      Available for small or large Perl jobs; see my home node.
