in reply to How to sanely handle unicode in perl?

The second case fails horribly. I have no idea why.
FYI, "ö" is not "o" in ASCII. "ö" doesn't exist in ASCII. \xC3 and \xB6 don't exist (have no meaning) in ASCII either. When you specify "LC_CTYPE=C", Perl doesn't know what the bytes \xC3 and \xB6 are supposed to be (that's why it complains that it cannot map them to Unicode).

If that's any consolation, it's IMPOSSIBLE to sanely mix one-byte encodings (256 symbols max) and Unicode (more than 1,000,000 code points). Perl itself is a great example of this (but that has already been discussed to death on this forum...)

Replies are listed 'Best First'.
Re^2: How to sanely handle unicode in perl?
by Anonymous Monk on Mar 21, 2015 at 10:13 UTC
    oh, btw... the least painful way to handle this is to ask the user for their preferred encoding. Use UTF-8 by default, but let your program accept a command-line option to change the encoding, something like ./ -encoding=latin-1 ...
Re^2: How to sanely handle unicode in perl?
by Sec (Monk) on Mar 23, 2015 at 10:16 UTC
    If you check the source I posted, the open specifies ":encoding(utf8)". And with that \xC3\xB6 does exist and is valid. So I don't really understand what you are talking about.
      I'm talking about the locale layer (from use open qw(:std :locale)). :encoding doesn't override :locale (maybe it should? but it doesn't; they basically stack). Note that using :raw simply removes the locale layer (like removing use open ... entirely, because by default Perl ignores locales... for the most part).
        Also note that "echo \xC3\xB6" won't always work... conceptually, Perl's strings are sequences of integers (code points); there is no guarantee about their internal representation, and in particular no guarantee that \xC3 and \xB6 are actually stored as single bytes. Try "echo \xC3\xB6\x{FFFD}" and see what happens...