Re^8: Encoding Problem - UTF-8 (I think)

by Anonymous Monk
on Dec 16, 2015 at 14:23 UTC ( [id://1150503] )


in reply to Re^7: Encoding Problem - UTF-8 (I think)
in thread Encoding Problem - UTF-8 (I think)

That's a bit like saying Kim Jong-un is the best leader in NK :)
Not at all! Well, maybe he is? :) But anyway, about UTF-8:
  1. It's easily recognizable. It's just extremely unlikely that you'll get a (not super short) string that just happens to look like valid UTF-8.

    OTOH, things like UTF-16LE, or (especially) one-byte stuff like Latin-1 "look like" complete binary garbage.
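    That recognizability is easy to test in Perl with the Encode module. A minimal sketch (`looks_like_utf8` is a hypothetical helper name, not a standard function): strict decoding croaks on the first malformed sequence, so wrapping it in `eval` gives a yes/no answer.

    ```perl
    use strict;
    use warnings;
    use Encode qw(decode FB_CROAK LEAVE_SRC);

    # Hypothetical helper: true if the byte string is well-formed UTF-8.
    sub looks_like_utf8 {
        my ($bytes) = @_;
        return eval { decode('UTF-8', $bytes, FB_CROAK | LEAVE_SRC); 1 } ? 1 : 0;
    }

    print looks_like_utf8("\xD0\xB0\xD0\xB1"), "\n";  # UTF-8 bytes of Cyrillic -> 1
    print looks_like_utf8("\xE9t\xE9"), "\n";         # Latin-1 bytes -> 0
    ```

    The Latin-1 bytes fail because `\xE9` would have to be followed by a continuation byte (`0x80`-`0xBF`), and `t` isn't one, which is exactly why one-byte-encoded text almost never passes as UTF-8.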

  2. Just remove a couple of random bytes from a UTF-8 string, and you'll lose a couple of characters. All others are still there, completely undamaged.

    Remove a couple of bytes in the middle of a UTF-32 string, and the rest of the string IS binary garbage.
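    A sketch of that self-synchronization, assuming Perl's Encode with its default lenient decoding (which substitutes U+FFFD for each malformed sequence): drop one byte from the middle of a two-byte-per-character UTF-8 string and only that one character is damaged.

    ```perl
    use strict;
    use warnings;
    use Encode qw(encode decode);

    # абвгд: five Cyrillic characters, two UTF-8 bytes each = 10 bytes.
    my $bytes = encode('UTF-8', "\x{430}\x{431}\x{432}\x{433}\x{434}");
    substr($bytes, 5, 1, '');            # drop the second byte of the middle char
    my $text = decode('UTF-8', $bytes);  # lenient decode: malformed -> U+FFFD
    print length($text), "\n";           # still 5 characters
    ```

    The decoder replaces the orphaned lead byte with U+FFFD and re-synchronizes on the next lead byte, so the four surrounding characters come through undamaged.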

One-byte encodings are just not general purpose, since some users want to use all kinds of characters in their documents. Look:

абвгд
Yeah, perlmonks uses a one-byte encoding... Windows-1252, I believe.

Now, there could be a self-synchronizing, easily recognizable, fixed-length encoding, but it wouldn't be backwards compatible with 7-bit ASCII. So what did you expect? If it's not backwards, it's not compatible...
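A sketch of what that backwards compatibility means concretely: any pure 7-bit ASCII string encodes to the identical byte sequence in UTF-8, so legacy ASCII files are already valid UTF-8.

```perl
use strict;
use warnings;
use Encode qw(encode);

# ASCII characters (codepoints 0-127) map to themselves in UTF-8.
my $ascii = "plain old ASCII";
print encode('UTF-8', $ascii) eq $ascii ? "same bytes\n" : "different bytes\n";
```

A fixed-length design could not pull this off: it would have to pad every ASCII character out to the fixed width, changing the bytes.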

Re^9: Encoding Problem - UTF-8 (I think)
by BrowserUk (Patriarch) on Dec 16, 2015 at 15:41 UTC
    It's easily recognizable. It's just extremely unlikely that you'll get a (not super short) string that just happens to look like valid UTF-8.

    And if that was the only unicode encoding, it might be a recommendation; but there are a multitude of "unicode" encodings, the rest of which don't share that property.

    Just remove a couple of random bytes from a UTF-8 string, and you'll lose a couple of characters. All others are still there, completely undamaged.

    That's a bit like saying that a fast poison is better than a slow poison because you suffer less. Basically making a feature of an incidental property that has no value in the real world.

    Bytes don't randomly disappear from the middle of files; and streams have low-level error detection/resend to deal with such events. The ability to re-synch a corrupted stream is of little value when it is such a rare event; and entirely not worth the costs of achieving it.

    Remove a couple of bytes in the middle of a UTF-32 string, and the rest of the string IS binary garbage.

    I'm not even sure that is true -- just move to the end and step backwards -- but even if it was, it is again of little relevance because bytes don't randomly disappear from files, and they will be detected and corrected by the transport protocols in streams.

    One byte encodings are just not general purpose... Since some users want to use all kinds of characters in their documents.

    I've never suggested that we should return to 1-byte encodings; but you have to recognise that variable length encoding undoes 50 years of research into search/sorting/comparison algorithms for no real benefit.

    but it wouldn't be backwards compatible with 7-bit ASCII.

    Recognise that the vast majority of computer systems and users were encoding files in localised ways (8-bit chars/code pages) for many years before the misbegotten birth of unicode and its forerunners; and utf-8 is not backwards compatible with any of that huge mountain of legacy data. Consigning all that legacy data to the dustbin as the product of "devs and users who created garbage" is small-minded bigotry.

    Very few people (basically, only the US and IETF) went straight from 7-bit to unicode. There are huge amounts of research and data that were produced using JIS/Kanji, Cyrillic, Hebrew, Arabic et al, and unicode is not compatible with any of it.

      And if that was the only unicode encoding, it might be a recommendation; but there are a multitude of "unicode" encodings, the rest of which don't share that property.
      Use what works.
      That's a bit like saying that a fast poison is better than a slow poison because you suffer less. Basically making a feature of an incidental property that has no value in the real world.
      Well, maybe bytes disappearing doesn't happen that often... but what if extra bytes appear instead?

      $ touch $'абс\xFF普通话'

      $ ls -l

      Should software deal with it? What should it do? Let's see:
      $ echo $'aaa\xFFaaa' | xclip -i # copy to clipboard
      (middle click in the textarea window) aaa�aaa

      Looks like Chromium does the right thing...

      The world is actually full of garbage strings :)
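      Perl's lenient decode handles such garbage the same way the Chromium paste above does: each invalid byte becomes the replacement character U+FFFD. A minimal sketch:

      ```perl
      use strict;
      use warnings;
      use Encode qw(decode);

      # \xFF can never appear in well-formed UTF-8.
      my $text = decode('UTF-8', "aaa\xFFaaa");
      print length($text), "\n";                  # 7: the bad byte became one char
      print ord(substr($text, 3, 1)) == 0xFFFD ? "replaced\n" : "kept\n";
      ```

      So "deal with it" in practice means: preserve the surrounding text, mark the damage visibly, and don't die.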
      I'm not even sure that is true -- just move to the end and step backwards
      Well, basically, there are a ton of 'false positives'.

      $ perl -MEncode -mutf8 -e 'printf "%vx\n", Encode::encode( "UTF-16", "ジ" )'

      fe.ff.0.e3.0.82.0.b8
      $ perl -MEncode -e 'printf "%vx\n", Encode::decode( "UTF-16", "\xFE\xFF\x00\x82\x00\xB8" )'
      82.b8

      A perfectly good codepoint, unfortunately it's Chinese instead of Japanese...

      (it's so painful to make perlmonks display what I want to display... does anyone have some tips? I use <tt> and <p>, it's a pain)
      I've never suggested that we should return to 1-byte encodings; but you have to recognise that variable length encoding undoes 50 years of research into search/sorting/comparison algorithms for no real benefit.
      As I said, I see no real benefit in variable length now. Maybe it made some sense when dinosaurs roamed the Earth and modems were 2400 bps.
      Very few people (basically, only the US and IETF) went straight from 7-bit to unicode. There are huge amounts of research and data that were produced using JIS/Kanji, Cyrillic, Hebrew, Arabic et al, and unicode is not compatible with any of it.
      And none of it is compatible with the rest... so it's not general purpose. Is it unreasonable to expect that a typical computer user (not programmer) in 2015 would be able to use Kanji, Cyrillic, Hebrew, Arabic etc in a single document? (and without pain?) That seems a very reasonable feature request...

      No, I don't think it was ever really supposed to make programmers' lives easier. Oh well, c'est la vie.

        fe.ff.0.e3.0.82.0.b8
        Damn, that's not right. I spent so much time and effort trying to make Perlmonks show some Unicode that I screwed up (-mutf8 should've been -Mutf8). Anyway, the point was: moving back won't help because we can't find where the error was - almost all combinations of bytes are valid codepoints...
