I'm starting work on a rather big project, which involves converting some 18,000 HTML documents of sometimes dubious quality to w3.org-validatable, accessible HTML 4.01. The client is a city government in Austria that wants to / has to comply with Level A of the W3C WAI guidelines (http://www.w3.org/WAI/). Most of this work will be done by an HTML::Parser-based parser.
Currently, the trickiest part seems to be language detection, and therefore I seek some wisdom:
The content is basically in German, but it is interspersed with some foreign words, mostly English, e.g. "email". All foreign words should be marked up with something like <span lang='en'>. The reason for this is that browsers with voice output need to know whether a word should be pronounced the standard way (i.e. German) or somewhat differently (e.g. English).
E.g.: if you pronounce "email" as if it were a German word, it sounds like the German word for "enamel", which is "Email" (btw, enamel is this stuff: http://www.artlex.com/ArtLex/e/enamel.html).
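To make the target markup concrete, here's a minimal sketch of the rewriting I have in mind. The %english wordlist here is just a hypothetical stand-in for a real dictionary; in the real thing this would run on the text chunks handed to me by HTML::Parser, not on raw strings:

```perl
use strict;
use warnings;

# Hypothetical mini-wordlist; the real one would come from dictionary files.
my %english = map { $_ => 1 } qw(email browser homepage);

my $text = "Bitte schicken Sie uns ein email.";

# Wrap every known-English word in a lang-tagged span.
$text =~ s{\b(\w+)\b}{
    $english{lc $1} ? qq{<span lang="en">$1</span>} : $1
}ge;

print $text, "\n";
# prints: Bitte schicken Sie uns ein <span lang="en">email</span>.
```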
So, how can I decide if a given word is German, English, French or Italian?
My best idea so far is to find dictionary files for each language and check whether the word appears in one of them. For performance reasons, I'm planning to put the dictionaries into an SQL database (or maybe a DB file? - but I know SQL better, so..) and maybe implement some caching.
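Roughly what I have in mind, with made-up mini-wordlists standing in for real dictionary files (e.g. ispell/aspell word dumps, one word per line):

```perl
use strict;
use warnings;

# Hypothetical mini-wordlists; in practice these would be loaded from
# per-language dictionary files.
my %wordlist = (
    en => [qw(email browser homepage)],
    de => [qw(schule nachricht kinder)],
);

# Fold everything into one lookup hash: word => language.
my %dict;
for my $lang (keys %wordlist) {
    $dict{$_} = $lang for @{ $wordlist{$lang} };
}

# Memoized lookup; unknown words default to the document language.
my %cache;
sub word_lang {
    my $w = lc shift;
    return $cache{$w} if exists $cache{$w};
    return $cache{$w} = exists $dict{$w} ? $dict{$w} : 'de';
}

print word_lang('email'),  "\n";   # prints: en
print word_lang('Schule'), "\n";   # prints: de
```

Of course this is exactly where the "email"/"Email" problem bites: a word that appears in both the German and the English dictionary can't be resolved by pure lookup, so some collision handling would be needed on top.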
I couldn't find anything suitable for this task on CPAN.
I can probably also use some sort of non-Perl solution, as long as it's free and runs on Linux.
Any pointers/comments about this are very welcome.