http://www.perlmonks.org?node_id=808686


in reply to Re^7: any use of 'use locale'? (source encoding)
in thread any use of 'use locale'?

Thank you, Alexander! That broadened my view, but it did not answer my most important questions. You answered as if I wanted to suppress Unicode for everyone and everywhere. That's not my goal.

I'd like to have a sandbox in Perl where Unicode is treated naturally.

Let me try to make it clearer. I am not that familiar with Perl's history, but let me assume that at some point there was no strict pragma. OK? Then someone thought it might be a good idea and found a way to implement it. Did that break any earlier code? I don't think so. But it made the use strict pragma widely available.

That is what I am talking about now. As far as I can see, module authors have no way to tell whether the module's caller uses utf8 or not. Am I correct? And would it break any earlier code if they had such a possibility? That would be a first step, IMHO :)

What operating system can currently provide perl with a complete Unicode environment (%ENV, @ARGV, STDIN, STDOUT, STDERR, open, opendir, mkdir, rmdir, unlink, ...)?

I have not investigated deeply how Unicode-proof Linux is these days, but at the system level I have had no complaints for years (Debian and Kubuntu). If you could give me some hints on how to test Unicode support, I'd like to try it.

Nõnda, WK

Re^9: any use of 'use locale'? (source encoding)
by afoken (Chancellor) on Nov 23, 2009 at 16:19 UTC
    You answered as if I wanted to suppress Unicode for everyone and everywhere. That's not my goal.

    No, and I did not understand you like that. I just wanted to explain why Unicode support still sucks so much. It's not just a Perl problem; some big problems with Unicode are outside our control. I would be happy if I could use utf8_for_everything;, but that cannot work today. We could end up implementing use utf8_where_possible; plus the same manual fiddling to turn on Unicode in subsystems that do not (yet) understand use utf8_where_possible; or that need workarounds.

    I'd like to have a sandbox in Perl where Unicode is treated naturally.

    Unicode is largely treated naturally in Perl (at least since 5.8.1). You put binary data, a string in a legacy encoding, or a string in a Unicode encoding into a scalar, and everything works as expected. All the magic happens behind the scenes. Things become ugly as soon as you start interfacing with the outside world, e.g. STDIN, STDOUT, STDERR, %ENV, @ARGV, and external libraries (XS code). DBI and the DBD::* modules are currently gaining more and more Unicode support, simply by reviewing and changing every place in the code where Perl scalars become C strings and vice versa, to respect or to set the internal Unicode flag of the Perl scalar. Sometimes by passing a Perl scalar further down into the code instead of passing a C string (this happened with some internal DBI APIs). Sometimes by converting Perl's idea of Unicode to and from what the operating system or a library expects. (This happens in DBD::ODBC.)
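
    A tiny illustration of that split between "inside Perl" and "the outside world" (just a hedged sketch; the example string and the UTF-8 layer are arbitrary choices, not taken from the DBI/DBD work described above):

        use strict;
        use warnings;

        my $smile = "smile \x{263A}";           # 7 characters, one of them far outside the byte range
        print length($smile), "\n";             # 7 -- inside Perl, Unicode "just works"

        # The trouble starts at the boundary to the outside world: without an
        # explicit I/O layer, printing this string warns "Wide character in print".
        binmode STDOUT, ':encoding(UTF-8)';     # declare what the outside world should receive
        print $smile, "\n";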

    Let me try to make it clearer. I am not that familiar with Perl's history, but let me assume that at some point there was no strict pragma. OK? Then someone thought it might be a good idea and found a way to implement it. Did that break any earlier code? I don't think so. But it made the use strict pragma widely available.

    This is part of your problem understanding the problems of Unicode support. Perl has a long history and culture of NOT breaking old code. use strict is an example of this. The inventors of strict could have turned strict on by default and forced people to update legacy code by either adding no strict or by cleaning up the old junk. This would perhaps have reduced the amount of bad Perl code a lot, and it would have forced newbies to write cleaner code. But many people would have gotten very angry, because millions of lines of code would have stopped working from one day to the next, just because the latest f*** Perl update started bean counting instead of getting the job done.

    The same thing happened with Unicode support, and you will find some good explanations in the Perl documentation of why Unicode support is largely OFF by default. Turning it on by default would have broken even more code that assumes a character is a byte.

    That is what I am talking about now. As far as I can see, module authors have no way to tell whether the module's caller uses utf8 or not. Am I correct? And would it break any earlier code if they had such a possibility? That would be a first step, IMHO :)

    Wrong problem. The module caller may use a mix of Unicode, legacy encodings, and binary data at any time. For any function or method in a module or class, it is completely irrelevant whether "the caller uses utf8" or not.

    Modules (or better: their authors) must no longer assume that scalars contain bytes; they contain arbitrarily large characters. length returns the number of characters in a scalar. If the internal Unicode flag on a scalar is turned off, the module may safely assume that the scalar contains bytes, either binary data or a legacy encoding. When it is on, it must correctly handle large characters. When interfacing with the outside world (OS, network, database, ...), it may be necessary to convert the large characters to a different encoding (and back, of course). Whenever scalars are returned, they either have the Unicode flag set and may contain large characters, or they have the flag cleared and must not contain large characters, not even as a UTF-8 byte stream. (Unless, of course, the purpose of the function is to generate UTF-8 byte streams.)
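
    As a hedged sketch of that rule (the function name send_text and the filehandle are made up for illustration, not taken from any real module), the boundary code might look like this:

        use Encode qw(encode);

        sub send_text {
            my ($fh, $string) = @_;
            # Unicode flag off: the scalar already contains bytes, pass them through.
            # Unicode flag on: it contains (possibly large) characters, so convert them
            # to a concrete encoding before handing them to the outside world.
            my $octets = utf8::is_utf8($string)
                       ? encode('UTF-8', $string)
                       : $string;
            print {$fh} $octets;
        }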

    Many modules do not need changes, because they did not assume byte==character from the beginning, and so Perl automatically does the right thing. Some modules tried to handle Unicode all by themselves even before Perl had Unicode support; Template::Toolkit seems to be such a module. They mostly work, and as long as you don't mix them with really Unicode-capable modules, nothing wrong happens. Their only problem is that their scalars contain UTF-8 byte streams instead of large characters. This can only be solved by either dropping support for legacy Perl versions (i.e. use 5.008_001) or by having the module code behave differently for old and new Perl versions.
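
    One hedged sketch of the second option (the helper name and the assumption that $bytes holds a UTF-8 byte stream are mine, purely for illustration):

        use strict;
        use warnings;

        # Upgrade a UTF-8 byte stream to real characters, but only on perls that can.
        sub upgrade_to_characters {
            my ($bytes) = @_;
            if ($] >= 5.008001) {
                utf8::decode($bytes)
                    or warn "not valid UTF-8, leaving it as bytes\n";
            }
            return $bytes;
        }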

    I have not investigated deeply how Unicode-proof Linux is these days, but at the system level I have had no complaints for years (Debian and Kubuntu). If you could give me some hints on how to test Unicode support, I'd like to try it.

    OK, some simple problems. "Unicode string" here means a string containing characters outside the ASCII and ISO-8859 range, e.g. the smiling face, Cyrillic letters, or the like. See http://cpansearch.perl.org/src/MJEVANS/DBD-ODBC-1.23/t/40UnicodeRoundTrip.t for examples. "Legacy string" means a string in any non-Unicode encoding, like ASCII, ISO-8859-x, the various Asian encodings, and so on.

    • Create two environment variables named FOO and BAR, one with a Unicode string of exactly 10 characters as value, the other one with a legacy string of exactly 10 characters. Choose randomly which variable gets the Unicode string. fork() and exec() some child processes (Scripts in bash, perl, ash, ksh, python, ruby, lua, ..., and perhaps some compiled programs written in C, C++, Assembler, Fortran, ...) and let each process report the number of characters in both FOO and BAR, without(!) telling the child processes which of the two variables actually contains Unicode characters.
    • Create a Perl script that writes randomly either a legacy string or a Unicode string to STDOUT, both containing exactly 10 characters. You may use binmode STDOUT,':utf8' and the like to get rid of all warnings and errors. Create a second program (in Perl or any other language) that reads its STDIN and reports the number of characters it read from STDIN. Connect both programs using a pipe, like this: perl writer.pl | perl reader.pl. (A minimal sketch of such a writer/reader pair follows after this list.)
    • Create a Perl script that randomly selects either a legacy string or a Unicode string of exactly 10 characters and passes that string as the only argument to child processes written in various languages. Each child process must report the number of characters passed as arguments.
    • Create a file whose name is a Unicode string. Does ls display it correctly? On both the console and via telnet and ssh from different other operating systems? Can rm remove it without resorting to rm -rf *? Can you copy and move it inside Midnight Commander? Does your preferred X environment display the name correctly? On the desktop and in the file manager? Even inside File-Open dialogs? Even in programs that are not part of the X environment (like Firefox)? What about other X environments (Gnome, KDE, xfce, ...)? Can you pass the file as a command line argument to arbitrary programs, and can they open it? Does the filename still look OK when you share the file via FTP, HTTP, SMB, NFS, rsync? Can you still open it over the network from Linux, Windows, *BSD, Solaris, MacOS? Can you overwrite it over the network?
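
    Here is a minimal sketch of the writer/reader pair from the second test (the file names and the chosen non-ASCII character are arbitrary; run it as perl writer.pl | perl reader.pl):

        # writer.pl -- randomly emits either a legacy or a Unicode string of 10 characters
        use strict;
        use warnings;
        binmode STDOUT, ':encoding(UTF-8)';               # silences "Wide character" warnings
        my @strings = ('abcdefghij', "abcdefghi\x{263A}");
        print $strings[ rand @strings ];

        # reader.pl -- reports how many characters arrived on STDIN
        use strict;
        use warnings;
        binmode STDIN, ':encoding(UTF-8)';                # a guess(!) -- nothing tells the reader what the writer sent
        my $input = do { local $/; <STDIN> };
        print length($input), "\n";

    The interesting part is that guess in reader.pl: the pipe carries only bytes, so the reader has no way of knowing which encoding the writer actually used.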

    Yes, these are stupid little tests, except for the last one. They are very similar to what the UnicodeRoundTrip test linked above does. You would not believe how often that simple test broke. And before I added the test, I had even more problems with data that was modified somewhere between Perl and the database engines, causing other tests to break very mysteriously or even to fail silently.

    Alexander

    --
    Today I will gladly share my knowledge and experience, for there are no sweeter words than "I told you so". ;-)
      I'd like to have a sandbox in Perl where Unicode is treated naturally.
      Unicode is largely treated naturally in Perl (at least since 5.8.1).

      Agreed. But getting the "sand" (characters) into this box is painful. AFAIU, Perl already has all (or at least most of) the pieces needed to control the bits coming from the outside world. (And let us assume that the OS is perfect, because it is out of our control.) Those pieces are OUT THERE, but not together.

      I can use them together, but it took far too much time to put the puzzle together. I am still not sure whether the pieces are now in the right places, and the whole thing is too fragile. If something changes in Perl development, I may end up with broken code as well. For example: using -C on the first line was a great solution for me, but as of 5.10 it was deprecated. What should I do?

      Instead of such a puzzle I'd like to have something that ties those techniques together correctly, so that a beginner or any Unicode user could just say something like:

      use utf8_everywhere;

      How could it break any older code?

      Btw, I am still testing your last test item (the first three depend on Perl, which I don't completely trust itself). No major problems so far with files (named 'zzzⲊфӨ✺☻.txt' and 'zzzⲊфӨ✺☻.svg'), but I have limited network possibilities for now and no other OSes except Kubuntu. One tiny problem so far: Padre (!) file dialogs do not use my locale to sort files. So far I am pretty sure that the main Linux distros are Unicode-ready at the core (system level), even if we can find some applications or other OSes that can't play along.

      Nõnda, WK
        Btw, I am still testing your last test item (the first three depend on Perl, which I don't completely trust itself)

        The first test does not depend on perl; you can use any language. And for the next two tests, I was just lazy. It is OK to use some other language you trust instead. Use C if you have no better idea, or Ruby, Lua, FORTRAN, Pascal, Modula, bash, ksh, PostScript, whatever you like that has the required features.

        [...] so that a beginner or any Unicode user could just say something like: use utf8_everywhere; How could it break any older code?

        Pretending for a moment that it were currently possible to implement "use utf8_everywhere": it would break every single piece of code that assumes characters are bytes. Just look at people who write perl scripts on Unix-like systems that read binary data. open-read-close and open-write-close have worked fine for the last few decades; there never was a need to insert a binmode statement. You could even read binary data from STDIN and write it to STDOUT and STDERR. Of course, if you wanted to be extra sure, or thought of porting the script to classic MacOS, the Microsoft world, or some strange IBM machines, you would insert binmode. But in the real world, binmode is not used everywhere it should be, and the code still works flawlessly. Forcing UTF-8 semantics on STDIN, STDOUT, STDERR, and all filehandles you open until you explicitly turn it off (using binmode), and even forcing UTF-8 semantics onto code you did not write (scripts and modules using your module or sourcing your script), will break all binary I/O severely. Remember, you defined "use utf8_everywhere" to work for the entire process and without exceptions.
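
        A small, hedged demonstration of what such forced UTF-8 semantics do to binary output (the file names are arbitrary):

            use strict;
            use warnings;

            my $data = join '', map { chr } 0 .. 255;       # 256 bytes of "binary" data

            open my $raw, '>', 'raw.bin' or die $!;
            print {$raw} $data;                             # 256 bytes on disk, as expected
            close $raw;

            open my $forced, '>:encoding(UTF-8)', 'forced.bin' or die $!;
            print {$forced} $data;                          # every byte >= 0x80 silently becomes two bytes
            close $forced;

            printf "raw: %d bytes, forced: %d bytes\n", -s 'raw.bin', -s 'forced.bin';   # 256 vs. 384

        The binary data silently grows from 256 to 384 bytes; anything that later reads forced.bin sees corrupted data.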

        As I tried to explain, it is currently impossible to implement "use utf8_everywhere", simply because the world outside of perl(.exe) is not yet ready to handle Unicode. The first three tests will clearly demonstrate that.

        Assume you read four bytes 0x42 0xC3 0x84 0x48, e.g. from STDIN, a file you opened, an inherited file handle, a command line argument or an environment variable. How many characters do these bytes represent? Explain why.

        Possible answers:

        • 3, because it's a UTF-8 encoded string, containing the letters B, Ä, and H.
        • 4, because it's a legacy encoded string, containing the letters B and H and two non-ASCII letters between them.
        • 4, because it's an EBCDIC-273 encoded string, containing the letters â, C, d, and ç.
        • 0, because it's binary data from a larger stream, encoding the 32-bit integer 0x4884C342
        • 0, because it's binary data from a larger stream, encoding the 32-bit integer 0x42C38448
        • 0, because it's binary data from a larger stream, encoding two 16-bit integers
        • 42, because it's a 32-bit handle of a GUI resource string of 42 characters
        • 2, because it's a legacy encoded string using two bytes per character
        • 1.33333, because it's a string encoded in a future 24-bit encoding.
        • 0.5, because it's a string encoded in an ancient martian charset ;-)
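
        For the cases that Perl itself can reproduce, here is a small sketch of how the very same four bytes count differently depending on the assumed context (only standard Encode and pack/unpack, nothing exotic):

            use strict;
            use warnings;
            use Encode qw(decode);

            my $bytes = "\x42\xC3\x84\x48";                                 # always the same four bytes

            printf "UTF-8:      %d characters\n", length(decode('UTF-8', $bytes));        # 3 (B, Ä, H)
            printf "ISO-8859-1: %d characters\n", length(decode('ISO-8859-1', $bytes));   # 4
            printf "LE 32-bit:  0x%08X\n", unpack 'V', $bytes;                            # 0x4884C342
            printf "BE 32-bit:  0x%08X\n", unpack 'N', $bytes;                            # 0x42C38448
            my @words = unpack 'v*', $bytes;
            printf "16-bit:     %d integers\n", scalar @words;                            # 2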

        As you see, it depends on context. For most cases, the operating system does not give you any context information. And for most operating systems, APIs, command line parameters, environment variables, and of course, file I/O are defined in terms of bytes, not in terms of characters. For the environment and command line parameters it is relatively safe to assume that the bytes represent some characters, but you don't have any idea which encoding is used. It could be a legacy encoding, it could be UTF-8, or something completely different, like EBCDIC. If you compile a program to run on Windows using the wide API ("Unicode application"), environment and command line are encoded as UCS-2 (or UTF-16LE, if Microsoft updated the API spec since the last time I read parts of it).

        Alexander

        --
        Today I will gladly share my knowledge and experience, for there are no sweeter words than "I told you so". ;-)