http://www.perlmonks.org?node_id=256728

donno20 has asked for the wisdom of the Perl Monks concerning the following question: (files)

How do I determine the encoding format of a file?

Originally posted as a Categorized Question.


Replies are listed 'Best First'.
Re: How do I determine the encoding format of a file?
by graff (Chancellor) on May 09, 2003 at 05:37 UTC
    If you are pulling in HTML pages from the web, there will usually be a tag or attribute in the markup that specifies the intended character set. Grab some pages, inspect the HTML source, and see what those declarations look like.
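Those declarations can be sniffed with a short regex pass. A minimal sketch (the helper name `declared_charset` is made up here, and a real parser such as HTML::Parser is sturdier than regexes against messy markup):

```perl
# Sketch: pull the declared charset out of fetched HTML.
# Naive by design -- a proper HTML parser is more robust.
use strict;
use warnings;

sub declared_charset {
    my ($html) = @_;
    # HTML5 style: <meta charset="utf-8">
    return lc $1 if $html =~ /<meta\s+charset\s*=\s*["']?([\w-]+)/i;
    # Older style: <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
    return lc $1 if $html =~ /charset\s*=\s*["']?([\w-]+)/i;
    return undef;    # no declaration found
}

print declared_charset('<meta charset="UTF-8">'), "\n";    # prints utf-8
```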

    If you are being handed "raw" (unmarked) text data, where there is absolutely no clue about what character encoding is being used, you need to have at least one of two things:

    • Correct knowledge about what language the text is written in.

      This will at least limit the number of possible encodings (e.g. Big5 vs. GB2312 vs. GBK vs. Unicode for Chinese), and if you become familiar with the different properties of established encodings for that language, working out logic to identify them is usually not a big problem.

    • Trained models of various encodings as applied to various languages.

      In particular, if you take samples of known data (where you're confident about the language and encoding), you can look at the frequency rankings or probabilities of byte n-grams -- usually bigrams will suffice, i.e. how many times each pairing of bytes occurs in a given set of data (the word "hello", in ASCII, has four bigrams: "he", "el", "ll", "lo").
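The bigram counting described above is only a few lines of Perl. A minimal sketch (the sub name is invented; real detection would compare these counts against frequency tables trained on samples of known language/encoding pairs):

```perl
# Sketch: count byte bigram frequencies in a raw byte string --
# the basic statistic a trained encoding model is built from.
use strict;
use warnings;

sub bigram_counts {
    my ($bytes) = @_;
    my %count;
    $count{ substr($bytes, $_, 2) }++ for 0 .. length($bytes) - 2;
    return \%count;
}

my $freq = bigram_counts("hello");
# "hello" yields four bigrams, each seen once: he, el, ll, lo
```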

    If you don't know what language is being used in raw text, there may be statistics that would tell you whether it's likely to be using single-byte or multi-byte encoding, but from my perspective, this is a research question (I'm not that good at statistics).

    Perl 5.8 has a module called "Encode::Guess", which might work well if you know the language involved and/or can provide some hints as to the likely candidates. (I haven't tried it yet, but it is admittedly limited and speculative at present.)
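A hedged sketch of how Encode::Guess might be used (the sample bytes are illustrative; by default it considers ascii, utf8, and BOM-marked UTF-16/32, and extra suspects can be listed on the `use` line or passed to guess_encoding):

```perl
# Encode::Guess returns an Encode::Encoding object on success,
# or a diagnostic string when the data is ambiguous or unknown.
use strict;
use warnings;
use Encode::Guess;

my $data = "h\xC3\xA9llo";          # raw bytes: "héllo" encoded as UTF-8

my $enc = guess_encoding($data);    # or: guess_encoding($data, qw/euc-jp/)
if (ref $enc) {
    my $text = $enc->decode($data); # decode with the guessed encoding
    print "guessed ", $enc->name, "\n";
} else {
    warn "could not guess: $enc\n"; # $enc holds the diagnostic string
}
```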

Re: How do I determine the encoding format of a file?
by idsfa (Vicar) on Apr 11, 2006 at 16:37 UTC

    File::BOM provides get_encoding_from_filehandle and get_encoding_from_stream to identify the encoding of Unicode files. Example:

    use File::BOM qw( :all );
    open my $fh, '<', $filename or die "can't open $filename: $!";
    my $encoding = get_encoding_from_filehandle($fh);
Re: How do I determine the encoding format of a file?
by particle (Vicar) on May 10, 2003 at 02:35 UTC
    have a look at File::MMagic. it guesses the filetype given a filename or a filehandle, and is quite configurable (you can add more file type descriptions based on regular expressions). it's a handy little module.
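A small sketch of File::MMagic in use (the sample content and the expected type are illustrative). Note it guesses a MIME-ish file type from magic numbers, which is coarser than charset detection but handy for triage:

```perl
# File::MMagic guesses a MIME type from magic numbers / content heuristics.
use strict;
use warnings;
use File::MMagic;

my $mm = File::MMagic->new;

# Guess from content already in memory:
my $type = $mm->checktype_contents("hello, world\n");
print "$type\n";    # plain ASCII text comes back as text/plain

# For a file on disk you would instead use:
#   my $type = $mm->checktype_filename($path);
```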
Re: How do I determine the encoding format of a file?
by Anonymous Monk on Mar 29, 2004 at 19:22 UTC
    The BOM for UTF-8 seems to be optional. What if the BOM is not present in a UTF-8 encoded file?

    Originally posted as a Categorized Answer.

Re: How do I determine the encoding format of a file?
by donno20 (Sexton) on May 09, 2003 at 01:53 UTC
    Read the first two bytes of the file. The corresponding encodings and hex codes are as follows:
    unicode Little Endian = "\xFF\xFE"
    unicode Big Endian = "\xFE\xFF"
    utf8 = "\xEF\xBB"
    ASCII = straight to content
      Maybe in a perfect world... Certainly, there's a reasonable chance that UTF-16(BE|LE) really will start with the "byte-order-mark" (BOM, \x{FEFF}), but oh, so many people are not so reasonable. I've seen UTF16 files created by Microsoft tools that were little-endian (of course) but had no BOM.

      As for utf8 files, um, where did this information about "\xEF\xBB" come from? I've never seen a file that starts like that (and I would have thought that any proper utf8 mechanism would barf given this sort of byte sequence -- an initial "\xEF" would dictate the start of a 3-byte character, but you don't indicate a third byte). If you mean \x{EFBB} (expressed as UTF-16), this would be three octets when converted to utf8: \xEE\xBE\xBB.

      Regarding the notion of ASCII data, many people don't realize that ASCII files are simply a proper subset of utf8 files -- this was, I think, one of the design goals for utf8. (This adds to my doubts about "\xEF\xBB": this sequence isn't supposed to be in an ASCII file, yet an ASCII file is supposed to work as a utf8 file.)

      I think the question, though vaguely stated, may have been more concerned with distinguishing, say, the different flavors of ISO-8859 (which is impractical without knowledge of the language being used in the text, or at least some well-trained n-gram models for various languages), or CP12* vs. Mac* vs. 8859-* vs. euc-*, etc, etc (somewhat less speculative, but still not always simple or deterministic without modeling).

      Perhaps more clarification is needed about the scope of the question. ("raw" text files? HTML/XML files? files that are simply "some form of unicode, but I don't know which"?)