I understand that to be safe I need to interpret the Unicode characters I accept only as their smallest Unicode representation
It almost sounds as if you're talking about composed form (LATIN CAPITAL LETTER E WITH ACUTE) vs decomposed form (LATIN CAPITAL LETTER E + COMBINING ACUTE ACCENT). Unicode::Normalize can convert between the two. That has nothing to do with FULLWIDTH LATIN CAPITAL LETTER A vs LATIN CAPITAL LETTER A, although the compatibility ("K") functions, NFKC and NFKD, do fold fullwidth forms down to their ordinary counterparts.
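For what it's worth, a minimal sketch of both points (the sample characters are my own, just for illustration):

    use strict;
    use warnings;
    use Unicode::Normalize;   # exports NFC, NFD, NFKC, NFKD

    my $composed   = "\x{00C9}";      # LATIN CAPITAL LETTER E WITH ACUTE
    my $decomposed = "E\x{0301}";     # LATIN CAPITAL LETTER E + COMBINING ACUTE ACCENT

    # NFC composes, NFD decomposes; the two forms compare equal once normalized.
    print NFC($decomposed) eq $composed ? "equal after NFC\n" : "different\n";

    # The compatibility ("K") forms additionally fold fullwidth characters:
    print NFKC("\x{FF21}"), "\n";     # FULLWIDTH A comes out as plain "A" (U+0041)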
But if you're trying to avoid spoofs, the usual advice is to refuse strings ("words"?) that mix characters from multiple scripts.
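As a sketch of that check, assuming it's enough to treat Common/Inherited characters (digits, punctuation, combining marks) as neutral:

    use strict;
    use warnings;
    use Unicode::UCD 'charscript';

    # Returns true if the word draws characters from more than one script,
    # ignoring Common and Inherited, which legitimately appear everywhere.
    sub mixed_script {
        my ($word) = @_;
        my %seen;
        for my $cp (map { ord } split //, $word) {
            my $script = charscript($cp) // 'Unknown';
            next if $script eq 'Common' or $script eq 'Inherited';
            $seen{$script} = 1;
        }
        return keys(%seen) > 1;
    }

    # "paypal" spelled with CYRILLIC SMALL LETTER A (U+0430) in place of Latin "a":
    print mixed_script("p\x{0430}ypal") ? "possible spoof\n" : "looks ok\n";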
The post as it was when I replied:
hi monks
I'm validating some mixed English and Japanese UTF-8 input. It sometimes contains a-z, A-Z, and 0-9 entered not only from the common ASCII-compatible Unicode range, but also from the range U+FF10 to U+FF5E:
http://en.wikibooks.org/wiki/Unicode/Character_reference/F000-FFFF
for example
A (U+0041)
vs
Ａ (U+FF21, http://www.decodeunicode.org/u+FF21)
I understand that to be safe I need to interpret the Unicode characters I accept only as their smallest Unicode representation,
e.g. interpret U+FF21 as U+0041 (as above).
So the question is: can I use some Perl function or module to do this, or do I have to convert them manually with a mapping? From all the experimenting I've done so far, it seems I'll have to do it manually. That surprises me, if they really are supposed to be interpreted in their smaller representation.
cheers for any feedback, sorry for my English
damian