http://www.perlmonks.org?node_id=962707


in reply to Comparing and getting information from two large files and appending it in a new file

First of all, if you use consistent indentation, it'll be easier to tell where you're going wrong. Perl::Tidy can fix that up for you.

Tie::IxHash gives you an ordered hash, but you don't appear to need the ordering, since you sort the hash keys yourself later anyway. I don't know much more about that module, but I assume the tie introduces some overhead, so you might as well stick with a normal hash.

Your $count variable check seems to have no purpose except to skip the first line of your second file. That being the case, simply discard that line with <INPUT>; before you start your while loop, and you can remove the counter completely.
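A minimal sketch of that idea (I'm using an in-memory filehandle and a made-up tab-separated layout just so the snippet is self-contained; in your script this would be your INPUT handle on the real file):

```perl
use strict;
use warnings;

# Stand-in for opening your second file.
my $contents = "id\tvalue\na\t1\nb\t2\n";
open my $input, '<', \$contents or die $!;

<$input>;    # discard the header line once, before the loop

my @rows;
while ( my $line = <$input> ) {
    chomp $line;
    push @rows, $line;    # ... your real per-line processing goes here
}
close $input;

print scalar(@rows), " data lines\n";    # prints "2 data lines"
```

No counter, no per-iteration check: the header is consumed once and the loop only ever sees data lines.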

But the big problem is @genome. You load your entire first file into this array, which may or may not be necessary. But then you re-split every line of it for every line you process from your second file, which multiplies your running time by the size of the first file: a single pass becomes a quadratic job. The better way is to process the first file into a hash you can look things up in, and do that preparation once, before you start looping through your second file.
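Here's roughly what that looks like. I don't know your actual column layout, so the fields and key format below are assumptions for illustration (again using in-memory handles so the sketch runs on its own):

```perl
use strict;
use warnings;

# Stand-ins for your two files; the chr/pos/gene columns are invented.
my $file1 = "chr1\t100\tgeneA\nchr1\t200\tgeneB\n";
my $file2 = "header line\nchr1\t100\nchr1\t999\n";

# Pass 1: build the lookup hash from file1, splitting each line exactly once.
open my $fh1, '<', \$file1 or die $!;
my %genome;
while ( my $line = <$fh1> ) {
    chomp $line;
    my ( $chr, $pos, $gene ) = split /\t/, $line;
    $genome{"$chr:$pos"} = $gene;
}
close $fh1;

# Pass 2: stream file2 and do constant-time hash lookups instead of
# re-splitting every @genome line on every iteration.
open my $fh2, '<', \$file2 or die $!;
<$fh2>;    # skip the header line
while ( my $line = <$fh2> ) {
    chomp $line;
    my ( $chr, $pos ) = split /\t/, $line;
    my $gene = $genome{"$chr:$pos"};
    print "$chr:$pos\t", ( defined $gene ? $gene : 'NOT_FOUND' ), "\n";
}
close $fh2;
```

Each file is now read and split exactly once, and the per-line work on file2 is a single hash lookup.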

I know you said you already tried a hash-of-hashes-of-hashes (I assume that's what HOHOH means) and it wasn't as efficient as this, but with all due respect, that really shouldn't be the case. This looks like a clear-cut instance of "load file1 into a hash and check file2 against it." Please show us the code you tried there, so we can help you in that direction.

Aaron B.
My Woefully Neglected Blog, where I occasionally mention Perl.
