Perl: the Markov chain saw
With 18,000 pages of 300+ words each, that is over 5 million words to process. Provided you have the memory, by far the fastest approach is to put the word lists into hashes in memory. You would then do something like:
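A minimal sketch of that approach. The hash names, the tiny sample word lists, and the exact body of check_word() are illustrative assumptions; in practice you would populate the hashes from full flat-file word lists.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Assumed data structures: one lookup hash per language, keyed on word.
# These few sample words stand in for full word lists.
my ( %german, %english, %french, %italian );
$german{$_}  = 1 for qw(und der die das ist);
$english{$_} = 1 for qw(and the is of to);
$french{$_}  = 1 for qw(et le la est les);
$italian{$_} = 1 for qw(e il la che di);

sub check_word {
    my $word = lc shift;
    # The return order encodes the language preference.
    return 'german'  if exists $german{$word};
    return 'english' if exists $english{$word};
    return 'french'  if exists $french{$word};
    return 'italian' if exists $italian{$word};
    return '';    # punctuation, whitespace, or unknown word
}

my $text = "der Hund und die Katze";

# Split on word boundaries; punctuation tokens simply fail to match.
for my $token ( split /\b/, $text ) {
    my $lang = check_word($token);
    print "$token => $lang\n" if $lang;
}
```

Hash lookups are O(1), so each of the millions of tokens costs only a few hash probes.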
By splitting on the word boundary we will pass punctuation to the check_word() sub, but it should not find a match and will just return ''. The return order from check_word() determines our preference: if the word could be German, we assume it is; if not, we see whether it could be English, French or Italian, in that order. If we don't know what it is, we call it German and press on.
You should modify this code to count the number of putative German, English, French and Italian words in a document. If you find that the English count is much greater than the German count, reprocess the document with a different check_word() function in which the priority order is changed so that English is returned first. The same goes for each of the other languages.
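The counting pass might look something like this. The sub name and the example counts are hypothetical; the point is just to find the dominant language so you can pick the matching check_word() priority order for the second pass.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Given per-language word counts for one document, return the
# language with the highest count so the document can be
# reprocessed with a check_word() variant that prefers it.
sub classify_document {
    my %count = @_;    # e.g. ( german => 120, english => 480, ... )
    my ($best) = sort { $count{$b} <=> $count{$a} } keys %count;
    return $best;
}

# Hypothetical counts from a first pass over one document.
my %count = ( german => 120, english => 480, french => 15, italian => 5 );
print classify_document(%count), "\n";    # english dominates here
```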
You can get an extensive list (250,000 words) of English words as a flat-file word list from http://www.puzzlers.org/secure/wordlists/dictinfo.php The puzzle people seem to have these lists easily and freely available as text files. I presume the same applies for languages other than English.
Any sort of database means disk reads, which will be hundreds or thousands of times slower than an in-memory hash table lookup. With memory so cheap and time so expensive....
Regardless of what you do, you want your word lists to be as complete as possible, and you should do any preprocessing before you start on the text.
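The preprocessing step of turning one of those flat-file word lists into a lookup hash is straightforward. The filename and sub name here are hypothetical; this assumes the common one-word-per-line format.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Slurp a one-word-per-line flat file into a lookup hash, lowercasing
# each word so later lookups are case-insensitive. Returns a hashref.
sub load_wordlist {
    my $file = shift;
    my %words;
    open my $fh, '<', $file or die "Cannot open $file: $!";
    while ( my $line = <$fh> ) {
        chomp $line;
        $words{ lc $line } = 1 if length $line;
    }
    close $fh;
    return \%words;
}

# Hypothetical usage:
# my $english = load_wordlist('english.txt');
# print "known word\n" if $english->{'aardvark'};
```

Doing this once up front, before the main pass over the text, keeps the per-word cost down to a single hash probe.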