http://www.perlmonks.org?node_id=661519


in reply to How to reverse a (Unicode) string

print scalar reverse "\noäu";

If you entered this using a UTF-8 editor but forgot to "use utf8;" to notify Perl of that fact, you may be dealing with the string "\no\x{C3}\x{A4}u" instead of the intended "\no\x{E4}u"!
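
For illustration, here is a minimal sketch (assuming the source file is saved as UTF-8) of how the same literal looks to Perl with and without the pragma:

# Without "use utf8;" the UTF-8 bytes of "ä" are two separate characters;
# with it they become the single character U+00E4.
my $without = "\x{C3}\x{A4}";   # what Perl sees without use utf8;
my $with    = "\x{E4}";         # what Perl sees with use utf8;

printf "without: %d character(s): %s\n", length $without,
    join ' ', map { sprintf '%02X', ord } split //, $without;
printf "with:    %d character(s): %s\n", length $with,
    join ' ', map { sprintf '%02X', ord } split //, $with;
__END__
without: 2 character(s): C3 A4
with:    1 character(s): E4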

reverse Works on bytes

reverse works on characters. If you have a byte string, every character represents the corresponding byte; if you have a Unicode text string, reverse reverses it correctly by Unicode codepoint.
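
As a rough sketch (the variable names are only illustrative), compare reversing the raw UTF-8 octets with reversing the decoded text string:

use Encode qw(decode encode);

my $octets = "o\x{C3}\x{A4}u";            # the UTF-8 bytes of "oäu"
my $text   = decode('UTF-8', $octets);    # the text string "o\x{E4}u"

my $mangled  = reverse $octets;   # "u\x{A4}\x{C3}o" -- the two bytes of "ä" get split
my $reversed = reverse $text;     # "u\x{E4}o"       -- the character stays intact

print encode('UTF-8', $reversed), "\n";   # prints "uäo"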

You can solve this problem by decoding the text strings

This suggests that decoding is a workaround. It is not; it is something you should always do when dealing with text data!
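
A minimal sketch of that habit, assuming the input arrives as UTF-8 octets on STDIN: decode at the boundary, work on text inside, and encode again on the way out.

use Encode qw(decode encode);

while (my $octets = <STDIN>) {
    my $text = decode('UTF-8', $octets);      # bytes in -> text
    chomp $text;
    my $reversed = reverse $text;             # operates on characters, as intended
    print encode('UTF-8', $reversed), "\n";   # text -> bytes out
}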

The use utf8; takes care that every string literal in the script is treated as a text string

Perl has no idea, and cannot be told, what kind your strings are: binary or text. Without "use utf8;" you don't necessarily have byte strings, but any text strings you write as literals are interpreted as iso-8859-1 rather than UTF-8. Note that iso-8859-1 is a unicode encoding -- it just doesn't support all of the characters.
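
To see that iso-8859-1 interpretation, a small sketch: without the pragma, the two UTF-8 bytes of "ä" are taken as the two latin-1 characters "Ã" and "¤".

# no "use utf8;" -- the literal below stands for the two bytes C3 A4
binmode STDOUT, ':encoding(UTF-8)';
my $s = "\x{C3}\x{A4}";
print "$s\n";    # prints the mojibake "Ã¤", not "ä"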

The rest of your post is accurate, but I wanted to respond so that newbies don't come away from it with a negative impression of Perl's unicode support. Perl's unicode support is great, but the programmer MUST learn the difference between unicode and utf-8, and the difference between text data and binary data.

Re^2: How to reverse a (Unicode) string
by moritz (Cardinal) on Jan 09, 2008 at 22:27 UTC
    Note that iso-8859-1 is a unicode encoding -- it just doesn't support all of the characters.

    I don't know what you mean by "unicode encoding" (are there encodings that map to non-unicode chars?), but in the Perl context it's worth mentioning that iso-8859-1 strings don't follow Unicode semantics by default; they need to be decoded like any other string:

    # this file is stored as latin1
    print "ä" =~ m/\w/ ? "Unicode\n" : "Bytes\n";
    __END__
    Bytes
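
    For contrast, a hypothetical counterpart: store the same file as UTF-8 and enable the pragma, and the literal does get Unicode semantics:

    # this file is stored as UTF-8
    use utf8;
    print "ä" =~ m/\w/ ? "Unicode\n" : "Bytes\n";
    __END__
    Unicode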

    Perl's unicode support is great, but the programmer MUST learn the difference between unicode and utf-8, and the difference between text data and binary data.

    Yes, and they have to learn that for any kind of tool that supports Unicode and different encodings.

    And I really like the Perl 6 spec which allows string operations on byte, codepoint and grapheme level ;-)

      I don't know what you mean by "unicode encoding" (are there encodings that map to non-unicode chars?), but in the Perl context it's worth mentioning that iso-8859-1 strings don't follow Unicode semantics by default; they need to be decoded like any other string

      It is a unicode encoding, in that after you've decoded the character number, the number maps 1-on-1 to the Unicode space. Don't forget that UTF-8 is just a way of encoding a sequence of *numbers*.

      That non-SvUTF8-flagged strings get ASCII semantics in some places is indeed by design, but that wasn't sufficiently thought through IMO. Note that these strings may get Unicode semantics in some circumstances and ASCII semantics in others; the ASCII semantics apply to character classes and upper-/lowercasing.

      I consider this a bug in Perl. See also Unicode::Semantics, and expect the bug to be fixed in 5.12.
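
      A small sketch of those dual semantics on the perls discussed here (utf8::upgrade only flips the internal flag; the text itself is unchanged):

      my $latin1 = "\xE4";       # "ä", not SvUTF8-flagged
      my $text   = "\xE4";
      utf8::upgrade($text);      # same string, now SvUTF8-flagged

      printf "%02X vs %02X\n", ord uc $latin1, ord uc $text;
      __END__
      E4 vs C4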

      And I really like the Perl 6 spec which allows string operations on byte, codepoint and grapheme level ;-)

      Just realise that Unicode strings don't have a byte level :)

        I hesitated before writing about the byte level, but S02 explicitly mentions it:

        You can also ask for the total string length of an array's elements, in bytes, codepoints or graphemes, using these methods .bytes, .codes or .graphs respectively on the array. The same methods apply to strings as well.

        And of course you have a byte level if you specify an encoding, or if there is a default one. Just like you can have a language-dependent notion of a grapheme if you pick a language.
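
        In Perl 5 terms, a rough sketch of the three levels (the sample string is only illustrative):

        use Encode qw(encode);

        my $s = "re\x{301}sume\x{301}";            # "résumé" with combining accents
        my $bytes  = length encode('UTF-8', $s);   # byte level:      10
        my $codes  = length $s;                    # codepoint level:  8
        my $graphs = () = $s =~ /\X/g;             # grapheme level:   6

        print "$bytes bytes, $codes codepoints, $graphs graphemes\n";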

        But I think the spec should be a bit clearer regarding how the character encoding is chosen.

        [iso-8859-1] is a unicode encoding, in that after you've decoded the character number, the number maps 1-on-1 to the Unicode space.

        By that logic, UTF-8 is not a "unicode encoding". For example, the codepoint U+00C2 does not map to the byte C2 in UTF-8. Your choice of name for this trait is very poor.