Because it can be used to store arbitrary values up to 72 bits wide (well, limited to 32 or 64 bits in practice), not just Unicode code points. You've demonstrated this.
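A minimal sketch of that point, assuming a 64-bit perl: chr() accepts values far beyond the Unicode maximum of 0x10FFFF, because the internal format is perl's extended UTF-8 rather than strict UTF-8.

    use strict;
    use warnings;
    no warnings 'non_unicode';  # some ops warn about super-Unicode code points

    my $s = chr(2**40);         # far past U+10FFFF; needs a 64-bit perl
    printf "ord: %s, length: %d\n", ord($s), length($s);  # one "character"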
(shrug) Yeah, I've never used that feature.
Perl's ability to use a more efficient storage format when possible and a less efficient one when necessary is a great feature, not an awful one. $x = "a"; $x .= "é"; is no more awful than $x = 18446744073709551615; ++$x;. Both cause an internal storage format shift.
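For the record, both shifts are easy to watch with the core Devel::Peek module. A minimal sketch (Dump() prints internals to STDERR):

    use strict;
    use warnings;
    use utf8;          # source is UTF-8, so "é" is a decoded character
    use Devel::Peek;

    my $x = "a";
    Dump($x);          # plain byte storage: FLAGS has no UTF8
    $x .= "é";         # the literal is UTF8-flagged, so $x gets upgraded
    Dump($x);          # FLAGS now includes UTF8

    my $n = 18446744073709551615;  # UV_MAX on a 64-bit perl
    Dump($n);                      # integer storage (IOK, IsUV)
    ++$n;                          # no longer fits in an integer...
    Dump($n);                      # ...so perl shifts to a float (NOK)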
The inability to tell Perl whether a string holds text, UTF-8 bytes or something else is unfortunate, since that information would let Perl catch common errors, but it has nothing to do with the twin storage formats.
Again, I both agree and disagree... The assumption that every string is in one particular storage format, unless explicitly specified otherwise, is a source of great confusion. Perl's source code (without "use utf8")? Output of readdir? Contents of @ARGV? I don't see how one can avoid thinking about implementation details, storage formats, leaky abstractions and other bad things. To me, 'Perl thinks everything is in Latin-1, unless told otherwise' seems like a more useful, understandable explanation.
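And that's why you end up decoding these things by hand. A minimal sketch using the core Encode module; 'UTF-8' here is an assumption about what the terminal and filesystem actually use, so substitute your real locale encoding:

    use strict;
    use warnings;
    use Encode qw(decode);

    # Command-line arguments arrive as raw bytes.
    my @args = map { decode('UTF-8', $_) } @ARGV;

    # So do names returned by readdir.
    opendir my $dh, '.' or die "opendir: $!";
    my @names = map { decode('UTF-8', $_) } readdir $dh;
    closedir $dh;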
Unfortunately, Perl does not have the information it would need in order to know you did something wrong.
For some definitions of 'wrong'. If I actually do have Latin-1 (more realistically, ASCII), then it's not 'wrong'; is that what you mean?