http://www.perlmonks.org?node_id=555925


in reply to Re^4: How is perl able to handle the null byte?
in thread How is perl able to handle the null byte?

Is there no possibility that when encoding a string to one of the many forms of unicode for output to an external system that there might legitimately be null bytes embedded within the string?

The only forms of unicode that can involve a null byte as part of a non-null character are the fixed-width encodings: the 16-bit UTF-16LE and UTF-16BE (and, by the same token, the 32-bit UTF-32 flavors).

"UTF-16" without the byte-order spec refers either to "whatever the native byte-order is on the current cpu" or to a data file encoded as 16-bit unicode characters and having a byte-order-mark (BOM, U+FEFF) as the very first character, so that unicode-aware readers know whether they need to swap bytes in order for their current cpu to see the intended 16-bit character values. Think of UTF-16* as if it were 16-bit PCM audio data: you need to handle it in two-byte chunks, and you need to know which of the two is the "least significant byte"; if you treat it as just bytes, anything can happen.

So it's a pretty nice feature that Perl uses utf-8 as its internal string representation, and not utf-16. The encoding is loosely analogous to uuencoding or base64, though it's actually a bit more clever: the idea is to convey a value too big for a single byte as a sequence of bytes drawn from a restricted range, and the number of bytes needed tends to be fewer for the "simpler" characters (those low in the code space) than for the "heavier" characters (those higher up).

Because of the design, ASCII characters (00-7F) remain single-byte characters in utf-8; code points U+0080 through U+07FF need two bytes, and from U+0800 through U+FFFF you need three bytes (and beyond U+FFFF, four). In the multi-byte "wide" characters, every byte has its high bit set, so none of them can be confused with ASCII. (The "Unicode Encodings" section of the perlunicode man page provides all the details quite nicely.)
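
To make those boundaries concrete, here's a quick sketch along the same lines (the hex in the comments is what the standard UTF-8 bit patterns give for these code points):

    use strict;
    use warnings;
    use Encode qw(encode);

    # code points straddling the 1-, 2- and 3-byte boundaries
    for my $cp (0x41, 0x7F, 0x80, 0x7FF, 0x800, 0xFFFD) {
        my $bytes = encode('UTF-8', chr $cp);
        printf "U+%04X -> %d byte(s): %s\n",
            $cp, length($bytes), join ' ', unpack '(H2)*', $bytes;
    }
    # U+0041 -> 1 byte(s): 41
    # U+007F -> 1 byte(s): 7f
    # U+0080 -> 2 byte(s): c2 80
    # U+07FF -> 2 byte(s): df bf
    # U+0800 -> 3 byte(s): e0 a0 80
    # U+FFFD -> 3 byte(s): ef bf bd

Note that none of the bytes in the multi-byte sequences drop below 0x80, and in particular none of them is ever a null.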

Of course, the whole notion of "wide characters" in C has the same status as the notion of "strings" -- i.e. it's a convenient fiction: a C "string" is just a null-terminated array of bytes, so a null byte inside a character would truncate the string for every routine in the standard library. That's why none of the pre-unicode multi-byte encodings (for Chinese, Japanese and Korean) ever used a null byte as a component of a multi-byte character.

(<update> Regarding this question: I feel sure that some of the MS wide character sets contain some characters where one half of the 16-bit values can be null... -- Well, now that you mention it, I've looked at hex dumps of Word files containing unicode characters, and they actually alternate at block boundaries (2KB blocks, I think, but I forget) between single-byte character encoding for blocks that don't contain wide characters, vs. UTF-16LE encoding for blocks with wide characters in them. Pretty scary stuff -- I would call it brain-damaged. But none of the 2-byte "legacy" MS/DOS code pages (e.g. CP936) ever used null bytes as part of a code point. </update>)
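
For what it's worth, that claim is easy to check against Encode's own cp936 table -- a rough sketch (the sample text is just a handful of arbitrary CJK characters):

    use strict;
    use warnings;
    use Encode qw(encode);

    my $sample = "\x{4E2D}\x{6587}\x{6C49}\x{5B57}";   # a few common CJK ideographs
    my $gbk    = encode('cp936', $sample);

    printf "cp936 bytes: %s\n", join ' ', unpack '(H2)*', $gbk;
    print index($gbk, "\0") >= 0
        ? "found an embedded null byte\n"
        : "no null byte anywhere in the encoded data\n";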

I guess if people wanted to pursue the notion of granting special status to character strings in order to enable some sort of trap or check for "embedded-null-byte", there would have to be a flag on the SV that says "this is a character string (so if you see an embedded null byte, that would mean something is wrong)."

Since SVs are used to store all kinds of stuff, some of which is expected to include null bytes by nature, there would have to be something similar to the utf8 flag that says "this is really character data, and I'd be worried if there were a null byte in it". Then every SV-to-char* operation would need to know whether the char* is going to be used as a character string in C, and if so, check that flag.
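
Short of patching the core, the closest userland approximation I can think of is to police that boundary yourself -- a sketch of the idea (assert_c_string_safe is a made-up name, not any existing API):

    use strict;
    use warnings;
    use Carp qw(croak);

    # refuse to pass along any string that C would silently truncate
    # at the first null byte
    sub assert_c_string_safe {
        my ($str) = @_;
        croak "embedded null byte in string headed for C"
            if index($str, "\0") >= 0;
        return $str;
    }

    my $filename = "innocent.txt\0.jpg";    # classic null-byte injection
    my $opened = eval {
        open my $fh, '<', assert_c_string_safe($filename)
            or die "open failed: $!\n";
        1;
    };
    print "rejected: $@" unless $opened;

It's not the SV-level flag described above -- perl has no such flag -- but it catches the same class of mistake at the point where a character string is about to become a C string.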