http://www.perlmonks.org?node_id=916333


in reply to JSON, UTF-8 and Filehandles

is_utf8 is used to check whether perl's internal representation of a string has been "upgraded" to utf8. That happens, for instance, when the string cannot be represented in ascii, such as your my $test = "\x{100}\x{2764}";. When you decode a utf8 string, it is converted to perl's internal representation. That representation just happens to be utf8, or something close to it, but it doesn't have to be, and worrying about it is just a waste of time.
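
For example (a minimal sketch; the byte string is just the UTF-8 encoding of U+2764):

    use strict;
    use warnings;
    use Encode;

    my $bytes = "\xE2\x9D\xA4";            # the three UTF-8 bytes for U+2764
    my $chars = Encode::decode('UTF-8', $bytes);   # now a character string

    print Encode::is_utf8($bytes) ? "yes" : "no", "\n";  # "no"  - plain byte string
    print Encode::is_utf8($chars) ? "yes" : "no", "\n";  # "yes" - but this only reports
                                                         # perl's internal bookkeeping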

Really, you only have to worry about decoding input and encoding output (edit: I had encoding/decoding backwards!). You received mangled text in your files/on your screen because you had JSON encode its results as a utf8-encoded string. JSON returns a string of bytes that are already utf8 encoded.

Next, when you write to the filehandle, which has the :utf8 layer set, those bytes are automatically encoded to utf8 again. You have double-encoded the string, and the result is quite predictably garbage.
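
Sketched out, the failure mode looks something like this (file name and data made up for illustration):

    use strict;
    use warnings;
    use JSON;

    my $data = { heart => "\x{2764}" };

    # BUG: encode_json already returns UTF-8 *bytes*; the :utf8 layer
    # then encodes those bytes a second time.
    open my $fh, '>:utf8', 'out.json' or die $!;
    print $fh encode_json($data);          # double-encoded garbage
    close $fh;

    # FIX: hand the already-encoded bytes to a raw filehandle instead.
    open my $raw, '>:raw', 'out.json' or die $!;
    print $raw encode_json($data);         # encoded exactly once
    close $raw;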

Quoth the utf8 manpage:

Do not use this pragma for anything else than telling Perl that your script is written in UTF-8.

Read the links under the SEE ALSO section of the utf8 docs for more about unicode. I also had this bookmarked; I remember it being pretty good: http://www.ahinea.com/en/tech/perl-unicode-struggle.html

Re^2: JSON, UTF-8 and Filehandles
by Kirsle (Pilgrim) on Jul 23, 2011 at 23:33 UTC
    Aha, thanks.

    I went and took a second look at the JSON manpage too... apparently the utf8 option in JSON already takes the liberty of encoding the output (clearing the UTF-8 flag in Perl, which is why it rendered as garbage), and that's handy if you're using JSON.pm to send/receive data over a network socket. Without the utf8 option, JSON still "supports" UTF-8; it just doesn't do the string encoding/decoding for you automatically.
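
    In other words, something like this, if I'm reading the manpage right (data structure made up):

        use strict;
        use warnings;
        use JSON;

        my $data = { heart => "\x{2764}" };

        # utf8 enabled: returns UTF-8 *bytes*, ready for a raw socket/filehandle
        my $bytes = JSON->new->utf8(1)->encode($data);

        # utf8 disabled: returns a *character* string; encode it yourself,
        # e.g. by printing through an :encoding(UTF-8) layer
        my $chars = JSON->new->utf8(0)->encode($data);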

    I find it a little weird that, if you don't use utf8; and open a filehandle in UTF-8 mode, wide characters printed to it come out mangled. Is this because Perl doesn't expect Unicode to be written to the filehandle (since the script printing it didn't use utf8), so it double-encodes it?

      I wrote the first reply in about 10 minutes and had to leave, so let me be clearer about the is_utf8 flag check: don't use is_utf8. is_utf8 checks whether a string is internally encoded in utf8, deep inside the angry bowels of perl. Using it is fraught with peril, which is unfortunate for such a seemingly easy function, right? It doesn't do what you think it does.

      You didn't read the unicode docs, did you? Here is a great link: http://perldoc.perl.org/perlunifaq.html#What-is-%22the-UTF8-flag%22%3f. There are also perluniintro, perlunicode, utf8, etc. Feel free to continue screwing yourself by not reading these. Don't forget to not read the link I gave in my first reply, either.

      Now I have time to reply to your bullets:

      • use utf8 is necessary for writing your source code in utf8. It is only useful for writing string literals in utf8, since there is not yet a snowman operator (perl6?). Your output is probably garbled because your string literal ($umbreon) is written in utf8 and perl has no way of knowing that without use utf8 (see the sketch after this list).
      • Your terminal/shell is utf8 compatible: it interprets everything as utf8. You could print each byte of a utf8 character separately and your terminal would still decode them as utf8.
      • The UTF8 is not corrupted. The UTF8 is just fine. You are encoding it twice.
        • You are encoding, decoding, and encoding again.
        • With utf8 turned on, JSON will decode the byte string you provide it from utf8 to perl's internal string representation.
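
      To illustrate the first bullet (a sketch, assuming the source file itself is saved as UTF-8):

          use strict;
          use warnings;
          use utf8;                           # literals below are UTF-8 in the source
          binmode STDOUT, ':encoding(UTF-8)'; # encode exactly once, on output

          my $heart = "❤";                    # one character, U+2764
          print length($heart), "\n";         # prints 1; without `use utf8` the
                                              # literal would be 3 raw bytes
          print $heart, "\n";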

      In response to your latest reply: stop worrying about the utf8 flag and just worry about encoding once and decoding once. Don't encode with JSON if you are encoding to utf8 when writing to the file, and vice versa with decode. That's all you need to worry about. Remember, this also applies to STDOUT.
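
      The same rule on the read side, sketched (file name is illustrative):

          use strict;
          use warnings;
          use JSON;

          # Either read raw bytes and let JSON do the decoding...
          open my $raw, '<:raw', 'in.json' or die $!;
          my $data = decode_json(do { local $/; <$raw> });
          close $raw;

          # ...or decode at the filehandle and tell JSON it gets characters.
          open my $fh, '<:encoding(UTF-8)', 'in.json' or die $!;
          my $data2 = JSON->new->utf8(0)->decode(do { local $/; <$fh> });
          close $fh;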

      The wide characters are probably mangled because you are using a utf8 string constant.

        I'll have to do some more tinkering. I seem to remember running into issues in my website's code where my JSON DB module was double-encoding text it wrote to disk, even though no Unicode text was actually hard-coded into that file (it got all its data from other areas of the code), yet adding use utf8; made it do the right thing. I was opening the filehandles in UTF-8 binmode and getting mangled output, which is why I made the simpler test script to see what was going on.

        Edit: it seems you're right. :) I made a test script that opens a filehandle in UTF-8 mode, reads it, and writes the contents to a different file in UTF-8 mode, without needing use utf8; in the source file.
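
        For the record, the test script was essentially this (a reconstruction; file names are placeholders):

            use strict;
            use warnings;

            open my $in,  '<:encoding(UTF-8)', 'in.txt'  or die $!;
            open my $out, '>:encoding(UTF-8)', 'out.txt' or die $!;
            print {$out} $_ while <$in>;   # decoded on read, re-encoded on write;
                                           # no `use utf8` needed since this file
                                           # contains no non-ASCII literals
            close $_ for $in, $out;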