For any mapping that includes strings of 3 or more "non-unicode" bytes equating to unicode code points, the arrangement of definitions in the ucm file should have at least one of the 3-byte strings before any of the longer string definitions. That is:

    ...
    <Uhhhh> \xhh\xhh\xhh |0        # first mention of any multi-byte mapping
    ...
    <Uhhhh> \xhh\xhh\xhh\xhh |0
    ...

The above arrangement will work, whereas the arrangement below will have the problem as described in the OP:

    ...
    <Uhhhh> \xhh\xhh\xhh\xhh |0    # first mention of any multi-byte mapping
    ...
    <Uhhhh> \xhh\xhh\xhh |0
    ...

(Updated to remove spurious spaces.)
Bear in mind that the ucm definitions don't need to be in unicode code-point order -- enc2xs doesn't care about the ordering (except when it comes to triggering this one strange little bug).
The same probably applies to any mapping involving 2-byte strings as well (that would seem logical), but I haven't tested that. In a nutshell, try ordering your definitions by the length of the encoded byte strings, or at least put one instance of a 3-byte mapping ahead of all instances of 4-byte mappings.
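If you'd rather not reorder a large ucm file by hand, the workaround above can be mechanized. This is a minimal sketch, not part of Encode or enc2xs: the `MAPPING` regex and `reorder` helper are my own hypothetical names, and it assumes mapping lines look like `<Uhhhh> \xhh... |0`. Sorting all mapping lines by encoded byte length is stronger than strictly necessary, but it guarantees every 3-byte mapping precedes every 4-byte one.

```python
import re

# Assumed mapping-line shape: <Uhhhh> \xhh\xhh... |n
# Comments and CHARMAP/header lines won't match and are left in place.
MAPPING = re.compile(r'^<U[0-9A-Fa-f]+>\s+((?:\\x[0-9A-Fa-f]{2})+)\s+\|\d+')

def byte_len(line):
    """Number of encoded bytes in a mapping line, or None for non-mapping lines."""
    m = MAPPING.match(line)
    return m.group(1).count('\\x') if m else None

def reorder(lines):
    """Stable-sort only the mapping lines by encoded byte length,
    keeping every non-mapping line (comments, headers) in its slot."""
    mappings = sorted((l for l in lines if byte_len(l) is not None), key=byte_len)
    it = iter(mappings)
    return [next(it) if byte_len(l) is not None else l for l in lines]

demo = [
    '# demo charmap fragment',
    '<U00E9> \\xC3\\xA9 |0',            # 2-byte string
    '<U1F600> \\xF0\\x9F\\x98\\x80 |0', # 4-byte string
    '<U20AC> \\xE2\\x82\\xAC |0',       # 3-byte string
]
for line in reorder(demo):
    print(line)
```

After reordering, the 3-byte euro mapping sits ahead of the 4-byte one, which is the arrangement described above as avoiding the bug.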
(So it was just "coincidental" that I chose q-mark for the initial attempt in my first reply -- it only worked because it just happened to be conventionally placed above the accented letters.)
Re^4: Encoding: my custom encoding fails on one character but works for everything else?!
by herveus (Parson) on Sep 14, 2009 at 16:03 UTC