If you are pulling in HTML pages from the web, there will usually be a tag or attribute in the markup that specifies the intended character set. Grab some pages, inspect the HTML source, and see what those declarations look like.
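As a sketch of what sniffing those declarations might look like, here is a minimal Python helper (the function name and the 4 KB window are my own choices, not a standard API) that scans the start of a page for either the HTML5 `<meta charset=...>` form or the older `http-equiv="Content-Type"` form:

```python
import re

def sniff_html_charset(raw: bytes):
    """Look for a charset declaration near the top of an HTML page.

    Handles both <meta charset="utf-8"> and the older
    <meta http-equiv="Content-Type" content="text/html; charset=big5">.
    Returns the declared name in lowercase, or None if nothing is found.
    """
    # Decode the first chunk loosely; the declaration itself is ASCII.
    head = raw[:4096].decode("ascii", errors="replace")
    m = re.search(r'<meta[^>]+charset=["\']?([\w-]+)', head, re.IGNORECASE)
    return m.group(1).lower() if m else None

html = b'<html><head><meta charset="UTF-8"></head><body>hi</body></html>'
print(sniff_html_charset(html))  # utf-8
```

Note that the declaration is only a claim by the page's author; mislabeled pages are common enough that you may still want to verify it against the actual bytes.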
If you are being handed "raw" (unmarked) text data, where there is absolutely no clue about what character encoding is being used, you need to have at least one of two things:
- Correct knowledge about what language the text is written in.
This will at least limit the number of possible encodings (e.g. Big5 vs. GB2312 vs. GBK vs. Unicode for Chinese), and if you become familiar with the distinguishing properties of the established encodings for that language, working out logic to identify them is usually not a big problem.
- Trained models of various encodings as applied to various languages.
In particular, if you take samples of known data (where you're confident about the language and encoding), you can look at the frequency rankings or probabilities of byte n-grams -- usually bigrams will suffice, i.e. how many times each adjacent pair of bytes occurs in a given set of data (the word "hello", in ASCII, has four bigrams: "he", "el", "ll", "lo").
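The bigram idea above can be sketched in a few lines of Python. This is a toy illustration, not a production detector: the function names are mine, and a real system would smooth the counts and train on substantial samples per language/encoding pair. To classify unknown data, you would build its profile and pick the trained profile with the highest similarity:

```python
import math
from collections import Counter

def bigram_profile(data: bytes) -> Counter:
    """Count how many times each adjacent pair of bytes occurs."""
    return Counter(zip(data, data[1:]))

def similarity(a: Counter, b: Counter) -> float:
    """Crude cosine similarity between two bigram frequency profiles."""
    dot = sum(a[k] * b[k] for k in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# "hello" in ASCII has four bigrams: he, el, ll, lo
print(bigram_profile(b"hello"))
```

Given trained profiles, classification is then just `max(trained, key=lambda enc: similarity(sample_profile, trained[enc]))` over the candidate encodings.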
If you don't know what language is being used in raw text, there may be statistics that would tell you whether it's likely to be using single-byte or multi-byte encoding, but from my perspective, this is a research question (I'm not that good at statistics).
Perl 5.8 has a module called "Encode::Guess", which might work well if you know the language involved and/or can supply hints about the likely candidates. (I haven't tried it myself, and its own documentation admits it is limited and speculative at present.)
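If you are working in Python rather than Perl, the same "narrow the field from a list of candidates" idea can be approximated with the built-in codecs. This is my own rough analogue, not a port of Encode::Guess, and like that module it only eliminates impossible encodings -- several candidates can decode the same bytes, so a unique answer is not guaranteed:

```python
def guess_encoding(raw: bytes,
                   candidates=("utf-8", "big5", "gb2312", "shift_jis")):
    """Return the candidate encodings that decode the data without error.

    A strict-mode decode failure rules an encoding out; success only
    means the bytes are *valid* in that encoding, not that it's right.
    """
    hits = []
    for name in candidates:
        try:
            raw.decode(name)  # strict mode raises on any invalid sequence
        except UnicodeDecodeError:
            pass
        else:
            hits.append(name)
    return hits

print(guess_encoding("\u4f60\u597d".encode("utf-8")))
```

When several candidates survive, you would fall back on the language knowledge or bigram statistics discussed above to break the tie.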