http://www.perlmonks.org?node_id=1039536

AlexTape has asked for the wisdom of the Perl Monks concerning the following question:

Dear omniscient monks,

I am trying to convert big text files with mixed charsets to one consistent UTF-8 encoding.

Anyway, my investigation of this topic has run into a black hole of nescience. What's the best way to do this, especially with Perl?

Perhaps you can give me some hints or a "simple" explanation of how you would do it? I know there are CPAN modules to identify non-UTF-8 characters, but at which level do they work? Does it make sense to go the binary way, or to compare at the hexadecimal level?

This is the first time I've really gotten involved with Perl in the whole charset jungle.

I'm still mindmapping ;))

kindly, perlig


---- UPDATE ----

OK. Say the input looks like this:

A text file with 100 chars:
40 of them are Italian (it), iso-8859-1 / windows-1252
20 of them are Greek (el), iso-8859-7
all the rest UTF-8

(see e.g. http://www.w3.org/International/O-charset-lang.html)

Now I want to process this data, but my parser can only read UTF-8. So I have to convert those 60 "non-UTF-8" chars to UTF-8 somehow.

got it? :)
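Something like the following is what I have in mind so far (untested sketch; the fallback to iso-8859-1 is just my assumption, substitute whatever legacy charset the data actually uses):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Encode qw(decode encode FB_CROAK);

# Try strict UTF-8 first; if the bytes are not valid UTF-8,
# fall back to iso-8859-1 (an assumption -- pick the legacy
# charset that fits your data).
sub to_utf8 {
    my ($bytes) = @_;
    my $copy = $bytes;    # decode() with a CHECK flag may modify its input
    my $text = eval { decode('UTF-8', $copy, FB_CROAK) };
    $text = decode('iso-8859-1', $bytes) unless defined $text;
    return encode('UTF-8', $text);
}

print to_utf8("\xE8"), "\n";    # latin-1 "e grave" -> UTF-8 bytes C3 A8
```

Of course this only works per line (or per chunk), not per character, and it cannot tell iso-8859-1 from iso-8859-7, so it is only a starting point.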

i´m nearly overstrained :P can you mabe tell me something about the existing guessing modules?!
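From skimming CPAN, Encode::Guess looks like the obvious candidate; here is a minimal try (the Greek test bytes are my own example):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Encode::Guess;

# guess_encoding() tries ascii, utf8 and the listed suspects.
# "\xE1\xE2" is invalid UTF-8 but is alpha+beta in iso-8859-7.
# NB: telling two single-byte charsets apart (e.g. iso-8859-1
# vs iso-8859-7) is usually ambiguous; in that case it returns
# an error string instead of an Encode object.
my $enc = Encode::Guess::guess_encoding("\xE1\xE2", 'iso-8859-7');
if (ref $enc) {
    print "guessed: ", $enc->name, "\n";
} else {
    print "ambiguous: $enc\n";
}
```

Is that the right direction, or do the monks use something else?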

kindly perlig

$perlig =~ s/pec/cep/g if 'errors expected';