That's exactly why I asked several times for a detailed explanation of what you need. From your last message describing your requirement and your files, you were just looking for records common to all the files in your collection. Now your need is different.
The method I suggested is still possible, with a slight modification. When you compare all your files against the first one, write to disk the records that were not found in the hash. You'll end up with versions of all the other files with the records from the first file filtered out. At this point, the original %seen hash is no longer needed. You can now compare the filtered file2 (presumably significantly smaller than the original) with file3, file4, etc. (also filtered and smaller), and so on. You end up with a situation where your input files get smaller and smaller and, at any given point in the process, you only hold one file's records in memory.
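Here is a minimal sketch of that scheme, under my assumptions about your data (one record per line, exact-match comparison); the file names and the `filter_files` helper are illustrative, not your actual code:

```perl
use strict;
use warnings;

# filter_files(@paths): treat the first path as the reference file, remove its
# records from all the others, then repeat with the filtered second file as the
# new reference, and so on. Returns the path of the last filtered file.
# Only one file's records are held in the hash at any point in the process.
sub filter_files {
    my @files    = @_;
    my $ref_file = shift @files;
    while (@files) {
        my %seen;    # records of the current reference file
        open my $ref_fh, '<', $ref_file or die "Can't open $ref_file: $!";
        $seen{$_} = 1 while <$ref_fh>;
        close $ref_fh;

        my @filtered;
        for my $file (@files) {
            my $out_name = "$file.filtered";
            open my $in,  '<', $file     or die "Can't open $file: $!";
            open my $out, '>', $out_name or die "Can't open $out_name: $!";
            while ( my $line = <$in> ) {
                # keep only records NOT found in the reference file
                print {$out} $line unless $seen{$line};
            }
            close $in;
            close $out;
            push @filtered, $out_name;
        }
        $ref_file = shift @filtered;    # filtered file2 becomes the next reference
        @files    = @filtered;          # the rest, also filtered and smaller
    }
    return $ref_file;
}
```

Each pass writes the filtered versions to disk and discards the previous hash, so memory use is bounded by the size of a single (shrinking) file rather than the whole collection.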
You write intermediate data to disk, but the amount of data you need to handle shrinks at each step of the process.