First of all, ignore any explanation that mentions latin-1. Perl itself doesn't treat latin-1 specially.
The lack of decoding of inputs plus the lack of encoding of outputs means the bytes were copied through unchanged. So if your input was encoded using cp1252, and it is output to a terminal (or browser) expecting cp1252, it appears to work.
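The robust alternative is to decode on input and encode on output. A minimal sketch using the core Encode module (the cp1252 sample string here is hypothetical):

```perl
use strict;
use warnings;
use Encode qw( decode encode );

my $bytes_in = "caf\xE9";                    # "café" encoded using cp1252
my $text     = decode("cp1252", $bytes_in);  # decode input: bytes -> Code Points
# ... operate on $text as decoded text ...
my $bytes_out = encode("cp1252", $text);     # encode output: Code Points -> bytes
print $bytes_out;
```

For file handles, the same can be done per-handle with an I/O layer, e.g. binmode($fh, ':encoding(cp1252)').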
The problem with this approach is that lots of tools expect decoded text (strings of Unicode Code Points), not encoded text (strings of cp1252 bytes).
- /\w/ will fail to work properly.
- uc will fail to work properly.
- length might not do what you want (for some encodings).
- substr might not do what you want (for some encodings).
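For instance, here's a sketch of the first two failures using an undecoded cp1252 "é" (byte 0xE9), contrasted with the decoded string:

```perl
use strict;
use warnings;
use Encode qw( decode );

my $bytes = "\xE9";                    # "é" as a raw cp1252 byte (not decoded)
my $text  = decode("cp1252", $bytes);  # "é" as a Code Point (U+00E9)

print $bytes =~ /\w/ ? "word" : "not word", "\n";  # not word: byte semantics
print $text  =~ /\w/ ? "word" : "not word", "\n";  # word
printf "0x%X\n", ord(uc($bytes));   # 0xE9: uc is a no-op on the undecoded byte
printf "0x%X\n", ord(uc($text));    # 0xC9: "É"
```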
In detail:
Perl expects the source file to be encoded using ASCII (no use utf8;) or UTF-8 (use utf8;). That said, when expecting ASCII (no use utf8;), bytes outside of ASCII in string literals produce a character with the same value in the resulting string.
For example, say Perl expects ASCII (no use utf8;) and it encounters a string literal that contains byte 0x80. This is illegal ASCII, but it's "€" in cp1252. Perl will produce a string that contains character 0x80. If you were to later print this out to a terminal expecting cp1252 (without doing any form of encoding), you'd see "€".
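A sketch of that pass-through, using a string escape in place of a literal 0x80 byte in the source:

```perl
use strict;
use warnings;

my $s = "\x80";               # same as a raw 0x80 byte in the source (no use utf8;)
printf "U+%04X\n", ord($s);   # U+0080, not U+20AC ("€")
print length($s), "\n";       # 1
# Printing $s raw to a terminal that expects cp1252 renders it as "€".
```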
- EBCDIC machines expect EBCDIC and UTF-EBCDIC rather than ASCII and UTF-8.