http://www.perlmonks.org?node_id=652869


in reply to Convert Windows-1252 Characters to Java Unicode Notation

use Encode qw(decode);

my $uni = decode("Windows-1252", $input);
$uni =~ s/(.)/sprintf "\\u%04X", ord $1/ge;
print $uni, "\n";

If you only want to change certain characters, change the . part to match only what you want to change to \u encoding.
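A minimal sketch of that restriction (the sample string is my own, not from the thread): match only non-US-ASCII characters, so plain ASCII text stays readable and only the 8-bit characters become \u escapes.

```perl
use strict;
use warnings;
use Encode qw(decode);

my $input = "caf\xE9";                    # "café" as Windows-1252 bytes
my $uni = decode("Windows-1252", $input);

# Escape only characters outside the US-ASCII range.
$uni =~ s/([^\x00-\x7F])/sprintf "\\u%04X", ord $1/ge;

print $uni, "\n";                         # caf\u00E9
```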

Juerd # { site => 'juerd.nl', do_not_use => 'spamtrap', perl6_server => 'feather' }

Replies are listed 'Best First'.
Re^2: Convert Windows-1252 Characters to Java Unicode Notation
by Jim (Curate) on Nov 25, 2007 at 22:52 UTC
    Thank you very much, Juerd. This worked brilliantly:

    # Convert Windows-1252 characters into Java's Unicode notation...
    $md->{$column} =~ s{([\x80-\xFF])}{ sprintf "\\u%04x", ord decode('cp1252', $1) }eg;
    Simple and elegant.

    (By the way, the frequency of occurrence of non-US-ASCII characters in the data is very low in relation to the amount of text. So-called 8-bit characters are infrequent and usually occur in isolation.)

    Jim

      You may be better off with a hard coded translation table, for performance.

      my %w1252_to_java = map {
          chr($_) => sprintf("\\u%04x", ord decode("Windows-1252", chr $_))
      } 0x80 .. 0xff;
      ...
      $md->{$column} =~ s/([\x80-\xff])/$w1252_to_java{$1}/g;

      (By the way, the frequency of occurrence of non-US-ASCII characters in the data is very low in relation to the amount of text. So-called 8-bit characters are infrequent and usually occur in isolation.)
      Maybe in English, but not when your data is in French, for example. In French you can easily have one or two accented characters every other word.

      Ain't it typical again that English speaking people automatically assume that the whole world uses only English...

      Well, I'm assuming that now you're just talking about your own, personal case. Yes, in that case it's very likely that accented characters are very rare. Until you start getting an international audience, that is...

      BTW the difference between ISO-Latin-1 and Windows-1252 will most probably be most visible in the so-called "smart quotes", those curly quotes that bend a different way for opening and closing quotes, and the ditto curly apostrophe.
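      A quick illustration of that difference (my own example, not from the thread): byte 0x93 is the left curly quote in Windows-1252, but in ISO-8859-1 it falls in the C1 control range, so the two encodings decode it to different code points.

      ```perl
      use strict;
      use warnings;
      use Encode qw(decode);

      # Byte 0x93: "smart quote" in cp1252, C1 control in Latin-1.
      my $cp1252 = ord decode("cp1252", "\x93");
      my $latin1 = ord decode("iso-8859-1", "\x93");

      printf "cp1252: U+%04X\n", $cp1252;   # U+201C (left double quotation mark)
      printf "latin1: U+%04X\n", $latin1;   # U+0093 (C1 control)
      ```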

        Bart,

        I read your second paragraph and exclaimed to myself, "Slow down, cowboy!"

        Then I read your third paragraph and regained my composure.

        Curiously in retrospect, the sentence of mine that you quoted originally read: "By the way, the frequency of occurrence of non-US-ASCII characters in my data is very low..." I changed "my" to "the" before posting. I cannot explain why. Maybe I'm subconsciously averse to claiming ownership of others' data.

        The point of my parenthetical remark was just this: In the discrete data with which I'm working, there's a very low proportion of 8-bit characters vis-à-vis the total amount of text. So, for example, the kind of optimization Juerd suggested later really isn't an optimization at all in my specific case. I anticipated that someone might suggest just such an optimization and implied that it would be a false optimization in this instance.

        I attended the Internationalization & Unicode Conference 31 last month in San José. I rubbed elbows with likeminded folk from all over the world who share my interest in languages and software globalization. Like you, I'm sensitive to matters of language and culture bias in software and computing. If I didn't care, I wouldn't have had a reason to post this inquiry in the first place. I would have just let the handful of 8-bit characters become mojibake and called it a day.

        Jim