Re^9: Encoding Problem - UTF-8 (I think)
by BrowserUk (Patriarch) on Dec 16, 2015 at 15:41 UTC ( [id://1150510] )
> It's easily recognizable. It's just extremely unlikely that you'll get a (not super short) string that just happens to look like valid UTF-8.

And if that were the only unicode encoding, that might count in its favour; but there are a multitude of "unicode" encodings, and the rest of them don't share that property.

> Just remove a couple of random bytes from a UTF-8 string, and you'll lose a couple of characters. All others are still there, completely undamaged.

That's a bit like saying that a fast poison is better than a slow poison because you suffer less. Basically, it's making a feature of an incidental property that has no value in the real world. Bytes don't randomly disappear from the middle of files; and streams have low-level error detection/resend to deal with such events. The ability to re-synch a corrupted stream is of little value when corruption is such a rare event, and it is entirely not worth the costs of achieving it.

> Remove a couple of bytes in the middle of a UTF-32 string, and the rest of the string IS binary garbage.

I'm not even sure that is true -- just move to the end and step backwards -- but even if it were, it is again of little relevance, because bytes don't randomly disappear from files; and in streams, such errors will be detected and corrected by the transport protocols.

> One byte encodings are just not general purpose... Since some users want to use all kinds of characters in their documents.

I've never suggested that we should return to 1-byte encodings; but you have to recognise that variable-length encoding undoes 50 years of research into search/sorting/comparison algorithms for no real benefit.

> but it wouldn't be backwards compatible with 7-bit ASCII.

Recognise that the vast majority of computer systems and users were encoding files in localised ways (8-bit chars/code pages) for many years before the misbegotten birth of unicode and its forerunners; and utf-8 is not backwards compatible with any of that huge mountain of legacy data. Consigning all of that legacy data to the dustbin as the product of "devs and users who created garbage" is small-minded bigotry. Very few people (basically, only the US and the IETF) went straight from 7-bit ASCII to unicode. There are huge amounts of research and data that were produced using JIS/Kanji, Cyrillic, Hebrew, Arabic et al., and unicode is not compatible with any of it.
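For what it's worth, both of the properties being argued over -- "looks like valid UTF-8" and re-synching after lost bytes -- can be demonstrated in a few lines of Perl using the core Encode module. A rough sketch only; the subroutine names are mine, not anything quoted above:

    use strict;
    use warnings;
    use Encode qw( decode FB_CROAK );

    # Does a byte string happen to decode as valid UTF-8?
    # (UTF-16/UTF-32 byte streams have no comparable self-describing pattern.)
    sub looks_like_utf8 {
        my( $bytes ) = @_;
        return eval { decode( 'UTF-8', $bytes, FB_CROAK ); 1 } ? 1 : 0;
    }

    # UTF-8's self-synchronisation: after bytes go missing, skip any
    # continuation bytes (10xxxxxx); decoding resumes at the next lead
    # byte, losing only the damaged character(s).
    sub next_char_boundary {
        my( $bytes, $pos ) = @_;
        ++$pos while $pos < length( $bytes )
            && ( ord( substr( $bytes, $pos, 1 ) ) & 0xC0 ) == 0x80;
        return $pos;
    }

    print looks_like_utf8( "caf\xC3\xA9" ) ? "valid\n" : "invalid\n";   # valid
    print looks_like_utf8( "caf\xC3"     ) ? "valid\n" : "invalid\n";   # invalid: truncated sequence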
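And to put a number on the "undoes 50 years of research" point: reaching the Nth character of a raw UTF-8 byte string means walking lead bytes from the start, where a fixed-width encoding needs only a multiply. Another rough sketch, assuming well-formed UTF-8 input (again, the subroutine name is mine):

    use strict;
    use warnings;

    # Byte offset of the $n-th character (0-based) in a well-formed UTF-8
    # byte string: an O(n) scan, because each character is 1..4 bytes wide.
    sub utf8_byte_offset {
        my( $bytes, $n ) = @_;
        my $pos = 0;
        while( $n-- > 0 && $pos < length( $bytes ) ) {
            my $lead = ord( substr( $bytes, $pos, 1 ) );
            $pos += $lead < 0x80 ? 1    # ASCII
                  : $lead < 0xE0 ? 2    # 2-byte sequence
                  : $lead < 0xF0 ? 3    # 3-byte sequence
                  :                4;   # 4-byte sequence
        }
        return $pos;
    }

    print utf8_byte_offset( "caf\xC3\xA9s", 4 ), "\n";   # 5: the accented 'e' occupies two bytes

    # Contrast: in UTF-32 the same answer is simply 4 * $n, which is why
    # fixed-width representations keep random access, binary search on
    # character positions, and the like at their textbook costs.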
With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority". I knew I was on the right track :)
In the absence of evidence, opinion is indistinguishable from prejudice.