What are you talking about? It has nothing to do with Perl. "e" is formed from the code point U+0065, "é" is formed from code point U+00E9 or from code points U+0065 + U+0301, etc. This is defined by The Unicode Consortium, not by Perl.
And the idea that it's OK to treat OCTET 0xE7 as a substitute for code point U+00E7 is totally not defined by the Consortium.
No, the input must be a string of characters whose ordinals are in 0..255, which it is. print has no problem writing those out as bytes. ISO Latin-1 doesn't factor into it.
OMG. Who cares what print expects. Even Perl (in other parts) thinks that that's ridiculous.
perl -wE 'say "ç" + "ç"'
The plus operator expects numbers, just like print expects octets, right?
If you claim that iso-latin-1 is used, then you claim that use utf8; produces iso-latin-1. It doesn't. It produces Unicode code points.
Printing UNICODE STRINGS (and Perl CAN tell the difference between binary and unicode) on binary STDOUT produces a sequence of octets ENCODED as Latin-1 for code points 0-255. The Consortium totally wouldn't approve of that. And that's it.

It appears you just don't like the word 'encoding'. Most people would still call Perl's behavior 'encoding', and that word is certainly good enough for me. You MAYBE would've had a point if Perl actually stored Unicode code point U+00E7 as an octet 0xE7 internally. But we know that it doesn't anyway. Have a nice day.