
Understanding and UTF-8 handling

by Anonymous Monk
on Jul 13, 2007 at 15:43 UTC ( #626470=perlquestion )
Anonymous Monk has asked for the wisdom of the Perl Monks concerning the following question:

From my reading of Advanced Perl Programming, 2nd ed., best practice for UTF-8 handling is to mark UTF-8 data as such as it comes into your program, and to mark filehandles as UTF-8 when they are expected to read or write UTF-8.

I am using CGI.pm to process a form POST that sometimes contains UTF-8 data (which is URL-encoded). The problem is that data retrieved by $q->param('foo') does not have the utf8 flag set when it contains UTF-8 data.

If I call $q->charset('utf-8'), then the data is marked as utf8. But why should I have to do this?! Shouldn't the client tell the server what kind of data is being sent, so the server can do the right thing automatically?

Perhaps part of the problem is that I don't understand the application/x-www-form-urlencoded content type. To me, that content type says nothing about which character set the URL-encoded data represents.

Finally, is there any harm in calling $q->charset('utf-8') in all cases? I hate to do it but it seems necessary.

Thank you for any help.
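For context, here is a minimal sketch (using the core Encode module, not CGI.pm itself) of what the missing decoding step amounts to: the raw bytes of a URL-decoded parameter must be decoded into Perl characters before the utf8 flag is set. The sample string is just an illustration:

```perl
use strict;
use warnings;
use Encode qw(decode);

# Raw bytes as they might arrive in an urlencoded POST body
# ("caf\xC3\xA9" is the UTF-8 encoding of "café").
my $raw = "caf\xC3\xA9";

# Undecoded, Perl sees five unrelated bytes. Decoding with the
# strict "UTF-8" encoding yields a four-character string with
# the utf8 flag set, and validates the input along the way.
my $text = decode('UTF-8', $raw);

print length($raw),  "\n";   # 5 (bytes)
print length($text), "\n";   # 4 (characters)
```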

Replies are listed 'Best First'.
Re: Understanding and UTF-8 handling
by ikegami (Pope) on Jul 13, 2007 at 17:06 UTC

    HTTP is all about transferring documents. The type of document is indicated by the Content-Type header. Some examples are text/html and image/jpeg. HTTP doesn't know, care, or have any fields to indicate the character encoding used by the document. Technically, that doesn't even make sense, because a document could use multiple forms of encoding. It's up to the document to provide any information the receiver needs to interpret it.

    Which brings us back to application/x-www-form-urlencoded. Form data is just another document to HTTP, since documents are the only thing it understands. And it's a pretty awful document format for international information exchange: it doesn't provide any information about the character encoding used, so CGI has no information on which to act. (I wonder if multipart/form-data is better at this.)

    One backward-compatible solution would be to allow/require the encoding to be specified as Content-Type parameters, just like HTML does (e.g. text/html; charset=ISO-8859-5). This was never done.

    Instead, HTML provides a means of requesting a specific character set by means of the accept-charset parameter of the FORM element*. Any correct browser will encode the data using the specified encoding. The recommended default is the same encoding as the one of the page containing the form.
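    In form markup, the accept-charset attribute looks like this (the action path and field name are placeholders for illustration):

    ```html
    <form action="/cgi-bin/handler.pl" method="post" accept-charset="UTF-8">
      <input type="text" name="foo">
      <input type="submit">
    </form>
    ```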

    * — You can technically specify multiple encodings to let the browser pick one, but the browser has no way of communicating which encoding it used. A trick you could use is to include in a hidden field a string that gets encoded differently by each character encoding. For example, the BOM character would distinguish the various Unicode encodings.
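    That hidden-field trick could be sketched as follows; detect_encoding is a hypothetical helper, and the set of candidate encodings is an assumption for illustration:

    ```perl
    use strict;
    use warnings;
    use Encode qw(encode);

    # The BOM character encodes to a distinct byte sequence in
    # each Unicode encoding, so the bytes that arrive identify
    # the encoding the browser used for the hidden field.
    my $sentinel = "\x{FEFF}";

    my %sig;
    $sig{ encode($_, $sentinel) } = $_ for qw(UTF-8 UTF-16BE UTF-16LE);

    sub detect_encoding {
        my ($received_bytes) = @_;
        return $sig{$received_bytes};   # undef if not recognised
    }

    print detect_encoding("\xEF\xBB\xBF"), "\n";   # UTF-8
    ```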

Re: Understanding and UTF-8 handling
by daxim (Chaplain) on Jul 13, 2007 at 17:39 UTC
    it is beyond the power of words to describe the way HTML browsers encode non-ASCII form data
    Unfortunately, both standardization and implementation are still a huge mess here
    UTF-8 and Unicode FAQ

    You can detect the encoding by adding a hidden field with bits of magic data in it. Compare what arrives at the server with a table of precomputed results for lots of encodings.

Re: Understanding and UTF-8 handling
by Juerd (Abbot) on Jul 15, 2007 at 14:35 UTC

    CGI.pm does not decode or encode. The $q->charset method only sets the character set announced in the Content-Type header.

    This means that you have to decode and encode manually (e.g. by using PerlIO layers). Decode everything you got, and encode everything you're about to send.
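    A minimal sketch of that decode-in/encode-out discipline using the core Encode module (PerlIO layers such as :encoding(UTF-8) achieve the same thing on filehandles); the sample string is an assumption for illustration:

    ```perl
    use strict;
    use warnings;
    use Encode qw(encode decode);

    # Decode everything coming in, encode everything going out.
    my $incoming_bytes = "na\xC3\xAFve";            # UTF-8 bytes from the client
    my $text           = decode('UTF-8', $incoming_bytes);

    # ... work with $text as a character string ...

    my $outgoing_bytes = encode('UTF-8', $text);    # back to bytes for output
    print $outgoing_bytes eq $incoming_bytes ? "round-trip ok\n" : "mismatch\n";
    ```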

    URL-encoded data is byte data, typically without a way to indicate which encoding was used. With POST requests, a charset attribute may be present on the Content-Type: application/x-www-form-urlencoded header, but the standard does not require it or say what the default is. In fact, even when it is present, it is most often ignored.

    Query strings and form data are usually encoded with the same encoding (charset) that was used on the HTML page containing the form, but this is not guaranteed. My advice for those who have standardized on UTF-8 is to try UTF-8 decoding first and, if the data is not valid UTF-8, to fall back to ISO-8859-1.
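    That advice can be sketched like this; decode_form_value is a hypothetical helper name, and the fallback relies on every byte sequence being valid ISO-8859-1:

    ```perl
    use strict;
    use warnings;
    use Encode qw(decode);

    # Try strict UTF-8 first; on failure, fall back to ISO-8859-1,
    # which always succeeds because every byte is valid Latin-1.
    sub decode_form_value {
        my ($bytes) = @_;
        my $copy = $bytes;   # FB_CROAK may modify its argument in place
        my $text = eval { decode('UTF-8', $copy, Encode::FB_CROAK) };
        return defined $text ? $text : decode('ISO-8859-1', $bytes);
    }

    print decode_form_value("caf\xC3\xA9") eq "caf\x{E9}" ? "utf-8\n" : "?\n";
    print decode_form_value("caf\xE9")     eq "caf\x{E9}" ? "latin-1\n" : "?\n";
    ```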

    Note that you MUST NOT use "utf8" when decoding CGI data. The lax "utf8" encoding skips validity checks, and may cause internal corruption and security bugs. Instead of "utf8", use the strict "UTF-8".
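    A quick way to see the strict codec's validation at work (the malformed byte string is just an illustration); the lax "utf8" codec would skip exactly this kind of check:

    ```perl
    use strict;
    use warnings;
    use Encode qw(decode);

    # "\xC3\x28" is malformed UTF-8: a start byte followed by '('
    # instead of a continuation byte.
    my $malformed = "\xC3\x28";

    # With FB_CROAK, the strict "UTF-8" codec dies on bad input
    # rather than silently passing it through.
    my $ok = eval { my $copy = $malformed;
                    decode('UTF-8', $copy, Encode::FB_CROAK); 1 };
    print $ok ? "accepted\n" : "rejected by strict UTF-8\n";
    ```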

      Not true.

      I would love to hear everyone's thoughts on this, since I'm not a unicode guru.


        $cgi->charset( "utf-8" ); has done the trick for me. I was having trouble reading UTF-8 characters from the POST.

Node Type: perlquestion [id://626470]
Approved by Corion