http://www.perlmonks.org?node_id=218103


in reply to detecting the language of a word?

With 18,000 pages of 300+ words each, that is well over 5 million words to process. Provided you have the memory, by far the fastest approach is to put the word lists into hashes in memory. You would then do something like:

my $german  = get_lang_hash('german.txt');
my $english = get_lang_hash('english.txt');
my $french  = get_lang_hash('french.txt');
my $italian = get_lang_hash('italian.txt');

my $new_text = '';
for my $word ( split /\b/, $text ) {
    my $lang = check_word($word);
    $new_text .= $lang ? qq!<span lang="$lang">$word</span>! : $word;
}

sub check_word {
    my ($word) = @_;
    # print "got $word\n";            # debugging only
    return ''   if $german->{$word};  # german is the default, so no markup
    return 'en' if $english->{$word};
    return 'fr' if $french->{$word};
    return 'it' if $italian->{$word};
    return '';
}

sub get_lang_hash {
    my $dict = shift;
    my %hash;
    open DICT, $dict or die "Can't open $dict: $!";
    while (<DICT>) {
        chomp;
        $hash{$_}++;
    }
    close DICT;
    return \%hash;
}

By splitting on word boundaries we pass punctuation and whitespace to the check_word() sub as well, but those should not match any dictionary and will just return ''. The return order in check_word() determines our preference: if a word could be German we assume it is; if not, we check whether it could be English, French or Italian, in that order. If we don't know what it is we call it German and press on.
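For example, here is a minimal sketch (not part of the code above) showing what split /\b/ actually hands to check_word():

use strict;
use warnings;

# Word boundaries are zero-width, so split /\b/ yields alternating word
# and non-word (whitespace/punctuation) chunks, with nothing lost.
my $sample = "Guten Tag, Welt!";
print "[$_]\n" for split /\b/, $sample;
# prints [Guten] [ ] [Tag] [, ] [Welt] [!] on separate lines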

You should modify this code to count the number of putative German, English, French and Italian words in a document. If you find that the English count is >> German, reprocess the document with a different check_word() function in which the priority order is changed so that English is checked first.... The same goes for each of the other languages. See the sketch below.
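One way the counting pass might look (a rough sketch only; the %counts, %dict_for and @priority names and the check_word_by_priority() sub are mine, not from the code above, and it assumes the four dictionary hashrefs built earlier are in scope):

# First pass: count putative hits per language.
my %dict_for = ( de => $german, en => $english, fr => $french, it => $italian );
my %counts   = map { $_ => 0 } keys %dict_for;
for my $word ( split /\b/, $text ) {
    for my $lang ( keys %dict_for ) {
        $counts{$lang}++ if $dict_for{$lang}->{$word};
    }
}

# Most frequent language first; it becomes the untagged default.
my @priority = sort { $counts{$b} <=> $counts{$a} } keys %counts;

# Second pass: same idea as check_word(), but with a configurable order.
sub check_word_by_priority {
    my ( $word, @order ) = @_;
    for my $lang (@order) {
        next unless $dict_for{$lang}->{$word};
        return $lang eq $order[0] ? '' : $lang;   # default language gets no span
    }
    return '';
}

In the second pass you would then call check_word_by_priority($word, @priority) instead of check_word($word).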

You can get an extensive list (250,000 words) of English as a flat-file word list from http://www.puzzlers.org/secure/wordlists/dictinfo.php The puzzle people seem to have these lists freely available as text files. I presume the same applies for languages other than English.

Any sort of database means disk reads, which will be hundreds or thousands of times slower than an in-memory hash lookup. With memory so cheap and time so expensive....

Regardless of what you do, you want your word lists to be as complete as possible, and you want to do any preprocessing of them before you start on the text.
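For instance, since German nouns are capitalized and English words at the start of a sentence are too, it may be worth normalizing case once when the lists are loaded. A possible variant of get_lang_hash() (the lc() and CR-stripping steps are my additions, not part of the code above):

# Sketch: do the normalization once, while the word list is loaded,
# so the per-word lookups in the main loop need no extra work.
sub get_lang_hash {
    my ($dict) = @_;
    my %hash;
    open my $fh, '<', $dict or die "Can't open $dict: $!";
    while ( my $word = <$fh> ) {
        chomp $word;
        $word =~ s/\r$//;          # word lists are often DOS-format text
        $hash{ lc $word } = 1;     # fold case at load time
    }
    close $fh;
    return \%hash;
}

If you normalize the lists this way, remember to look up lc $word in check_word() as well, otherwise capitalized words at sentence starts will never match.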

cheers

tachyon

s&&rsenoyhcatreve&&&s&n.+t&"$'$`$\"$\&"&ee&&y&srve&&d&&print