by gods on Feb 11, 2000 at 00:06 UTC
First of all, if you use consistent indentation, it'll be easier to tell where you're going wrong. Perl::Tidy can fix that up for you.
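If you'd rather call it from Perl than run the perltidy command on the file (which is the usual way), here's a minimal sketch using Perl::Tidy's documented string interface; the snippet being tidied is made up:

    use Perl::Tidy;

    # Untidy code as a string; normally you would just run
    # the perltidy command on the script file itself.
    my $messy = 'for my $i (1..3){print $i;print "\n";}';

    my $tidied;
    Perl::Tidy::perltidy(
        source      => \$messy,
        destination => \$tidied,
    );
    print $tidied;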
Tie::IxHash gives you a hash that remembers insertion order, but you don't appear to need that, since you sort the hash keys later anyway. I don't know the module's internals, but a tied hash adds overhead to every store and fetch, so you might as well stick with a normal hash.
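To illustrate (the keys and values here are made up), a plain hash plus a sort at output time gives you the same deterministic ordering with none of the tie overhead:

    use strict;
    use warnings;

    # A plain hash: insertion order isn't preserved, but that's
    # fine, because the keys get sorted at output time anyway.
    my %length_for = (
        chr3 => 300,
        chr1 => 100,
        chr2 => 200,
    );

    # Sorting the keys on output gives deterministic order with
    # no per-operation cost from a tied hash.
    for my $key ( sort keys %length_for ) {
        print "$key\t$length_for{$key}\n";
    }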
Your $count check seems to serve no purpose except to skip the first line of your second file. That being the case, simply discard that line with a bare <INPUT>; before you start your while loop, and you can remove the counter completely.
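A minimal sketch, assuming your second file's handle is named INPUT as in your code (the file name is made up):

    open INPUT, '<', 'file2.txt' or die "Can't open file2.txt: $!";

    <INPUT>;    # read and throw away the header line, once

    while (my $line = <INPUT>) {
        chomp $line;
        # ... handle every remaining line; no $count test needed ...
    }
    close INPUT;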
But the big problem is @genome. You load your entire first file into this array, which may or may not be necessary. But then you re-split every line of it for each line you process from your second file, so the run time grows with the product of the two file sizes rather than their sum. The better way would be to process the first file into a hash you can look that information up in, and do that preparation once, before you start looping through your second file (see the sketch below).
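A hedged sketch of that shape; the file names, the tab-separated layout, and the choice of the first column as the join key are all assumptions about your data:

    use strict;
    use warnings;

    # Pass 1: read the first file ONCE, splitting each line once,
    # into a hash keyed on the identifier (assumed to be column 0).
    my %genome;
    open my $first, '<', 'file1.txt' or die "file1.txt: $!";
    while ( my $line = <$first> ) {
        chomp $line;
        my ( $id, @fields ) = split /\t/, $line;
        $genome{$id} = \@fields;
    }
    close $first;

    # Pass 2: for each line of the second file, one cheap hash
    # lookup replaces a full re-scan and re-split of @genome.
    open my $second, '<', 'file2.txt'    or die "file2.txt: $!";
    open my $out,    '>', 'combined.txt' or die "combined.txt: $!";
    <$second>;    # discard the header line, as above
    while ( my $line = <$second> ) {
        chomp $line;
        my ($id) = split /\t/, $line;
        if ( my $fields = $genome{$id} ) {
            print {$out} join( "\t", $line, @$fields ), "\n";
        }
    }
    close $second;
    close $out;

This reads and splits each file exactly once, so the work grows with the sum of the two file sizes instead of their product.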
I know you said you already tried a hash-of-hashes-of-hashes (I assume that's what HOHOH means) and it wasn't as efficient as this, but with all due respect, that shouldn't be possible: a single hash lookup per line is far cheaper than re-splitting the whole array for every line. This looks like a clear-cut case for "load file1 into a hash and check file2 against it." You might show us the code you tried in that case, so we can help you in that direction.
In reply to Re: Comparing and getting information from two large files and appending it in a new file