Composite Charset Data to UTF8?
by AlexTape (Monk) on Jun 18, 2013 at 10:21 UTC
AlexTape has asked for the wisdom of the Perl Monks concerning the following question:
Dear omniscient monks,
I am trying to translate big text files with composite (mixed) charsets into one consistent UTF-8 encoding. So far my investigation of this topic has run into a black hole of nescience.. what is the best way to do it, especially with Perl?
Perhaps you can give me hints or some "simple" explanations of how you would do it? I know there are CPAN modules to identify non-UTF-8 characters, but at which level do they work? Does it make sense to take the binary route, or to do the comparison at the hexadecimal level?
This is the first time I have really gotten into the whole charset jungle with Perl..
I'm still mindmapping ;))
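On the "identify non-UTF-8" part: one option on CPAN is Encode::Guess, which checks a byte string against a list of suspect encodings. A minimal sketch (the sample bytes here are just an illustration, not from the post; note that adding several single-byte suspects such as iso-8859-1 *and* iso-8859-7 is usually ambiguous, because any byte sequence is valid in both):

```perl
use strict;
use warnings;
use Encode::Guess;   # exports guess_encoding()

# Raw bytes: Greek alpha encoded as UTF-8.
my $octets = "\xCE\xB1";

# With no extra suspects, Encode::Guess tries ascii and utf8.
# On success it returns an Encode::Encoding object; on failure
# (or ambiguity) it returns an error string instead.
my $enc = guess_encoding($octets);
die "guess failed: $enc\n" unless ref $enc;

my $text = $enc->decode($octets);   # Perl internal character string
print $enc->name, "\n";             # which encoding matched
```

The catch for this use case: guessing works well for distinguishing UTF-8 from "not UTF-8", but not for telling one legacy single-byte encoding from another.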
---- UPDATE ----
Ok. Maybe the input looks like this:
Text file with 100 chars:
40 of them are Italian (it): iso-8859-1 / windows-1252
20 of them are Greek (el): iso-8859-7
all others are UTF-8
(see e.g. http://www.w3.org/International/O-charset-lang.html)
Now I want to process this data.. but my parser can only read UTF-8. So I have to convert these 60 non-UTF-8 chars to UTF-8 in some way..
got it? :)
I'm nearly overstrained :P Can you maybe tell me something about the existing guessing modules?!
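Since the end goal is a file the UTF-8-only parser can read, one workable approach is: attempt a strict UTF-8 decode first, and fall back to a single assumed legacy encoding only when that fails. A sketch under assumptions (the iso-8859-1 fallback and the file names are hypothetical, and this only works if each chunk/line is uniformly encoded):

```perl
use strict;
use warnings;
use Encode qw(decode FB_CROAK LEAVE_SRC);

# Decode one chunk of raw bytes: strict UTF-8 first, then a
# hypothetical legacy fallback (iso-8859-1 here; choose whatever
# fits your data -- there is no reliable way to auto-detect
# between single-byte encodings like iso-8859-1 and iso-8859-7).
sub to_text {
    my ($octets) = @_;
    # FB_CROAK makes decode() die on malformed UTF-8;
    # LEAVE_SRC keeps $octets unmodified for the fallback.
    my $text = eval { decode('UTF-8', $octets, FB_CROAK | LEAVE_SRC) };
    return $text if defined $text;
    return decode('iso-8859-1', $octets);   # single-byte: never fails
}

open my $in,  '<:raw',             'mixed.txt' or die $!;
open my $out, '>:encoding(UTF-8)', 'clean.txt' or die $!;
while (my $line = <$in>) {
    print {$out} to_text($line);   # re-encoded as UTF-8 by the output layer
}
close $out or die $!;
```

The output layer `:encoding(UTF-8)` does the re-encoding, so `to_text()` only has to produce a Perl character string.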
$perlig =~ s/pec/cep/g if 'errors expected';