If you use "W" with unpack, then it will behave the same as ord.
Is there any point in checksumming using Unicode ordinals?
Sum-the-bytes checksums are pretty useless -- you can perform any transposition, shuffle or reverse the entire string, and detect nothing -- that's why CRCs, Adler-32 and the like were invented.
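For instance (a minimal sketch using List::Util; the message is arbitrary):

    use List::Util 'sum';

    my $msg     = 'hello world';
    my $mangled = reverse $msg;                   # completely reordered

    print sum( unpack 'C*', $msg     ), "\n";     # 1116
    print sum( unpack 'C*', $mangled ), "\n";     # 1116 -- the mangling is invisible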
The only (scant) merit of sum-the-bytes is that it is very fast. What would be achieved by slowing that to a crawl by forcing it to pick its way through the technical abortion that is multi-byte character encodings? You certainly aren't going to gain any greater guarantee of integrity.
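A rough Benchmark sketch of that cost (the test string and size are arbitrary; exact ratios will vary by perl version and data):

    use Benchmark 'cmpthese';
    use Encode qw( encode decode );
    use List::Util 'sum';

    # 10,000 characters that each encode to 3 bytes of UTF-8
    my $utf8 = encode 'UTF-8', "\x{263A}" x 10_000;

    cmpthese -3, {
        bytes    => sub { my $s = sum unpack 'C*', $utf8 },
        ordinals => sub { my $s = sum unpack 'W*', decode 'UTF-8', $utf8 },
    };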
My gut feel is that, as there are so many different "Unicode standard" encodings out there in the wild, the chances of undetected transmission errors producing false positives with sum-the-ordinals values are far higher than with sum-the-bytes values.
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.
I don't understand this. I can't think of any issue that would affect sum unpack 'W*', decode 'UTF-8', $utf8 that wouldn't also affect sum unpack 'C*', $utf8.
Let's start with the simple case.
With bytes, the sum-of-bytes checksum will detect any single-bit corruption in the string.
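That much is easy to demonstrate (a sketch; which bit gets flipped is arbitrary):

    use List::Util 'sum';

    my $msg     = 'some payload';
    my $corrupt = $msg;
    vec( $corrupt, 37, 1 ) ^= 1;              # flip one arbitrary bit in transit

    print sum( unpack 'C*', $msg     ), "\n";
    print sum( unpack 'C*', $corrupt ), "\n"; # differs by a power of 2 -- always detected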
Can you say that there is no single-bit corruption that can cause (say) a 4-byte encoded character to be seen as (say) a valid sequence of one 1-byte encoded character and one 3-byte encoded character, or a valid sequence of two 2-byte encoded characters, whose code-points happen to sum to the same numeric value as the code-point of the original uncorrupted 4-byte character?
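That is checkable by brute force for the single-bit case, along these (sketched) lines -- the choice of U+1F600 is arbitrary, and for many starting characters the loop will print nothing because the decoder itself rejects every single-bit mutation; the interesting search is over multi-bit corruptions and longer strings:

    use Encode qw( encode decode );
    use List::Util 'sum';

    my $orig  = "\x{1F600}";                     # one 4-byte UTF-8 character
    my $bytes = encode 'UTF-8', $orig;
    my $want  = ord $orig;

    for my $bit ( 0 .. 8 * length( $bytes ) - 1 ) {
        my $mutant = $bytes;
        vec( $mutant, $bit, 1 ) ^= 1;            # single-bit corruption
        my $chars = eval { decode 'UTF-8', $mutant, Encode::FB_CROAK };
        next unless defined $chars;              # invalid sequence: caught anyway
        my $got = sum map { ord } split //, $chars;
        print "bit $bit: still valid, ordinal sum $got vs $want\n";
    }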
Argument 2:
With 2-bit characters, each of the four possible characters can be transposed to 2 other characters by a single-bit corruption, giving 4 possibilities for false positives for every 2 characters in the string. For 3-bit characters that becomes 8 possibilities for every 2 characters.
With 8-bit byte values, that becomes 256 possible undetectable pairs of single-bit corruptions for every 2 characters in the string. (Which is what makes sum-the-bytes checksumming so dire.)
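The small alphabets are cheap to enumerate exhaustively, along these lines (a sketch; the exact counts depend on how you frame "possibility", so treat it as a starting point rather than confirmation of the figures above):

    use strict;
    use warnings;

    for my $bits ( 2, 3, 8 ) {
        my $max  = 2**$bits - 1;
        my $miss = 0;
        # every 2-character string, with one single-bit corruption per character
        for my $x ( 0 .. $max ) {
            for my $y ( 0 .. $max ) {
                for my $i ( 0 .. $bits - 1 ) {
                    for my $j ( 0 .. $bits - 1 ) {
                        ++$miss
                            if ( $x ^ ( 1 << $i ) ) + ( $y ^ ( 1 << $j ) ) == $x + $y;
                    }
                }
            }
        }
        print "$bits-bit characters: $miss sum-preserving double corruptions\n";
    }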
With 1,112,064 code-point values, there are obviously going to be far more permutations. It isn't going to be a direct 1,112,064, because many of the corruptions will result in invalid characters, which will reduce the total. But then there are the possibilities of a single character becoming 2 valid characters, or two becoming one (which cannot happen with bytes). So, whilst many of those n-byte to (valid) m-byte corruptions won't sum to the same value, some will.
Overall, my gut-feel assessment is that the possibilities for undetected, self-cancelling single/double/triple/quad/quin/... corruptions are far, far higher. (An interesting problem to try and verify this assertion!)
For that reason, amongst others, there seems to be no benefit in calculating checksums in terms of Unicode ordinals rather than bytes.
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.