"\x{E4}" =~ /\w/
then what encoding is assumed for the following?
"\x{2660}" =~ /\w/
It never deals with any encoding. It always deals with string elements (characters). And those string elements (characters) are required to be Unicode code points.
- Character E4 is taken as Unicode code point E4, not some byte produced by iso-8859-1.
- Character 2660 is taken as Unicode code point 2660, not some byte produced by iso-8859-1.
It's entirely up to you to create a string with the right elements, which may or may not involve character encodings.
Or what do you think it is, if not ISO-8859-1?
A Unicode code point, regardless of the state of the UTF8 flag.
- Character E4 (UTF8=0) is taken as Unicode code point E4, not some byte produced by iso-8859-1.
- Character E4 (UTF8=1) is taken as Unicode code point E4, not some byte produced by iso-8859-1.
In short, you're overcomplicating things. It's NOT:
Each character is expected to be an iso-8859-1 byte if UTF8=0 or a Unicode code point if UTF8=1.
It's simply:
Each character is expected to be a Unicode code point.