Re: Comparing large files
by LanX (Chancellor) on Feb 11, 2014 at 19:30 UTC
If the 10 "megs" is just the file size, that should come to roughly 1e6 words.
IIRC one hash entry incurs approximately 100 bytes of overhead, so putting all the entries into a hash should be feasible even on my puny netbook.
Parse the pronunciation file line by line and build a lookup hash.
Then parse the other file line by line and check for entries missing from the hash.
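A minimal sketch of that approach; the file names and one-word-per-line format are assumptions, and in-memory strings stand in for the two files so the snippet runs as-is:

```perl
use strict;
use warnings;

# Hypothetical data: in a real run these would be the two files,
# opened with open my $fh, '<', $filename.
my $pron_data  = "cat\ndog\nfish\n";
my $other_data = "cat\nbird\nfish\nmouse\n";

# Pass 1: build the lookup hash from the pronunciation file.
my %seen;
open my $ph, '<', \$pron_data or die $!;
while (<$ph>) {
    chomp;
    $seen{$_} = 1 if length;
}
close $ph;

# Pass 2: report words in the other file that the hash doesn't know.
my @missing;
open my $oh, '<', \$other_data or die $!;
while (<$oh>) {
    chomp;
    push @missing, $_ if length && !$seen{$_};
}
close $oh;

print "missing: @missing\n";    # prints "missing: bird mouse"
```

Adapt the per-line parsing (e.g. `split` out the first field) to whatever the real file format is.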
If you really run into RAM problems, try splitting the hash into several disjoint ones (e.g. one for every 10% of the file) and parse the second file once for each hash.
Shouldn't take longer than seconds (at most minutes).
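One way to sketch that low-memory variant (partitioning here is by line number modulo the pass count, and a candidate set is pruned across passes instead of rescanning the whole second file each time, which is a slight variation; in-memory strings again stand in for the files):

```perl
use strict;
use warnings;

# Hypothetical data standing in for the two files.
my $pron_data  = "cat\ndog\nfish\nowl\n";
my $other_data = "cat\nbird\nfish\nmouse\n";

my $passes = 2;    # each pass holds only ~1/$passes of the entries

# Start with every word of the second file as a missing candidate.
my %missing;
open my $oh, '<', \$other_data or die $!;
while (<$oh>) { chomp; $missing{$_} = 1 if length }
close $oh;

for my $pass (0 .. $passes - 1) {
    # Build only this pass's disjoint slice of the lookup hash.
    my %seen;
    open my $ph, '<', \$pron_data or die $!;
    while (<$ph>) {
        chomp;
        $seen{$_} = 1 if length && ( $. - 1 ) % $passes == $pass;
    }
    close $ph;
    # A word found in this slice is not missing after all.
    delete @missing{ grep { $seen{$_} } keys %missing };
}

print join( ' ', sort keys %missing ), "\n";    # prints "bird mouse"
```

Only one partial `%seen` hash lives in memory at a time, so peak usage drops by roughly the number of passes at the cost of rereading the pronunciation file.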
( addicted to the Perl Programming Language)