|The stupid question is the question not asked|
It likely takes a new Perl programmer a long time to learn how to read and write a Unicode UTF-16 file. The following script would not be obvious to someone just beginning to learn Perl.
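The original script is not shown here, but a minimal sketch of the obvious first attempt might look like this (the file name `Output.txt` and the sample text are hypothetical):

```perl
use strict;
use warnings;

# The obvious-looking open() call a newcomer would try first.
# "Output.txt" is a hypothetical file name for this sketch.
my $text = "Hello, world!\n";

open my $out, '>:encoding(UTF-16LE)', 'Output.txt'
    or die "Can't open Output.txt: $!";
print {$out} $text;
close $out;

# On Microsoft Windows the default :crlf layer sits *below* the
# encoding layer, so it operates on the already-encoded bytes: "\n"
# encodes to the two bytes 0A 00, and the byte-level CRLF translation
# then rewrites the lone 0A as 0D 0A, yielding 0D 0A 00 -- an
# odd-length, corrupted UTF-16 stream. On Unix-like systems, where no
# :crlf layer is pushed by default, this same script works fine.
```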
Imagine how glorious the neophyte would feel once he had finally figured out the right way to handle Unicode text in Perl after having slogged through the nearly impenetrable Perl Unicode documentation for hours. Then imagine how frustrating it would be for him to run the script and realize it doesn't work. It creates a badly broken and unusable text file (on Microsoft Windows, at least).
But our neophyte is patient and persistent. He Googles for help, and after several more hours of painstaking research and experimentation, he determines the following script works.
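Again the script itself is not reproduced here, but a self-contained sketch of the working version might look like the following (file names are hypothetical, and the sketch writes its own sample input so it can be run anywhere):

```perl
use strict;
use warnings;

# Create a small UTF-16LE sample input so the sketch is self-contained.
# "Input.txt" and "Output.txt" are hypothetical file names.
open my $setup, '>:raw:encoding(UTF-16LE)', 'Input.txt' or die $!;
print {$setup} "Hello, world!\n";
close $setup;

# The working incantation: :raw first pops the default :crlf byte-level
# layer, :encoding(UTF-16LE) then handles the 16-bit code units, and
# the trailing :crlf re-adds newline translation at the *character*
# level, where it cannot split a UTF-16 code unit in half.
open my $in,  '<:raw:encoding(UTF-16LE):crlf', 'Input.txt'
    or die "Can't open Input.txt: $!";
open my $out, '>:raw:encoding(UTF-16LE):crlf', 'Output.txt'
    or die "Can't open Output.txt: $!";

print {$out} "\x{FEFF}";          # add the byte order mark by hand
print {$out} $_ while <$in>;

close $in;
close $out;
```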
The chicanery needed just to read and write a Unicode file on Windows using Perl is absurd. It's much too arcane.
Can someone explain how this sequence of PerlIO layers works? Why must so many layers be used? Can these layers be specified using the open pragma? If so, how? If not, why not? And why has this ancient Perl bug still not been fixed in version 5.12.2?
In fact, the second, more elaborate version of the script is still wrong. The file named Input.txt has a byte order mark in it, so its encoding is actually UTF-16, not UTF-16LE. It seems there's no way to generate a UTF-16 file in little-endian byte order directly. To generate such a file, you have to specify the UTF-16LE CES (which is wrong) and add the byte order mark explicitly to make it UTF-16 instead of UTF-16LE.
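A tiny sketch (using an in-memory filehandle, which gets no default :crlf layer) shows what the explicit byte order mark does: printing U+FEFF through a UTF-16LE encoding layer emits the two bytes FF FE, which is precisely the little-endian BOM that promotes the file from "UTF-16LE" to "UTF-16":

```perl
use strict;
use warnings;

my $bytes;
open my $fh, '>:encoding(UTF-16LE)', \$bytes or die $!;
print {$fh} "\x{FEFF}A\n";   # BOM, then the letter A, then newline
close $fh;

# Dump the raw bytes: the BOM U+FEFF comes out as FF FE in
# little-endian order, "A" as 41 00, and "\n" as 0A 00.
printf "%02X ", ord($_) for split //, $bytes;
print "\n";                  # prints: FF FE 41 00 0A 00
```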
In reply to Chicanery Needed to Handle Unicode Text on Microsoft Windows by Jim