Try running this program. Now rename Prunt to Print and watch Perl do something unexpected.
The explanation is that print $fh makes Perl look for Print in the current package before looking in $fh's package, whereas $fh->Print looks only in $fh's class and any base classes.
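A minimal sketch of the two lookup paths (the package and sub names are hypothetical, not from the demo program):

```perl
use strict;
use warnings;

package MyHandle;
sub new   { my $class = shift; return bless {}, $class }
sub Print { my $self  = shift; return "MyHandle::Print(@_)" }

package main;

# A sub with the same name in the calling package:
sub Print { return "main::Print(@_)" }

my $fh = MyHandle->new;

# Indirect-object syntax -- Print $fh "hello" -- is ambiguous: the
# parser can resolve Print as the sub above rather than as a method
# on $fh, which is exactly the surprise described in the parent post.

# Arrow syntax searches only MyHandle and its @ISA, so it is safe:
print $fh->Print("hello"), "\n";    # MyHandle::Print(hello)
```

This ambiguity is one of the standard arguments (see perlobj) for avoiding indirect-object notation entirely.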
On another subject entirely: this demo program violates another 'always, always, always' rule that people often cite: always pass the class name as the first argument of a constructor, and bless the new object into the package supplied to you rather than into the current package. I think that's unnecessary. Any derived package can call your constructor and then rebless the new object into itself.
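For instance (package names hypothetical), a base-class constructor can bless into its own package unconditionally, and a subclass can rebless the result:

```perl
use strict;
use warnings;

package Animal;

# Constructor that ignores how it was called and blesses into its
# own package -- the style the parent post defends.
sub new {
    my %args = @_;
    return bless { name => $args{name} }, 'Animal';
}

package Dog;
our @ISA = ('Animal');

sub new {
    my ($class, %args) = @_;
    my $self = Animal::new(%args);   # plain call, no class name passed
    return bless $self, $class;      # rebless into the derived package
}

package main;

my $dog = Dog->new(name => 'Rex');
print ref($dog), "\n";      # Dog
print $dog->{name}, "\n";   # Rex
```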
IMHO, one of several problems that make OOP unnecessarily difficult in Perl 5 is that a constructor can't easily and efficiently tell whether it was invoked as pkg->new or pkg::new (in other words, whether the package name was passed as the first argument or not, and therefore where in @_ the 'real' arguments begin). Most people use the former, but the latter is more efficient because it passes one argument fewer. Perl 6 fixes this problem, along with so many others.
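One common workaround -- a heuristic, not a real fix -- is to peek at the first argument; the class name here is hypothetical:

```perl
use strict;
use warnings;

package Counter;

sub new {
    my $class = __PACKAGE__;
    # Heuristic: shift off the first argument only if it looks like
    # this class (or a subclass of it).  It misfires whenever a
    # genuine data argument happens to be such a class name -- which
    # is why the ambiguity can't be resolved "easily and efficiently".
    if (@_ && !ref $_[0] && eval { $_[0]->isa(__PACKAGE__) }) {
        $class = shift;
    }
    my ($start) = @_;
    return bless { count => defined $start ? $start : 0 }, $class;
}

package main;

my $x = Counter->new(5);    # method call: 'Counter' is shifted off
my $y = Counter::new(5);    # plain call: nothing to shift
print "$x->{count} $y->{count}\n";    # 5 5
```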
"Never disable buffering without a good reason", eh?
I'm actually of the opposite opinion, and I turn it off out of habit. I think this is a case where the default is wrong:
turning buffering on should be regarded as a performance hack.
The trouble is that a lot of beginners' perl scripts are mixtures of back-ticks and perl output -- an early use for me was an attempt at adding some readable column descriptions to the output of a unix command-line utility. What happens with buffering turned on is that you get the output from these two sources intermixed in an almost random fashion. And the reason this is happening is not at all obvious; in fact, I'd argue it's nearly impossible to figure out -- even if it occurs to you to read the docs for the special variables, there's nothing in the writeup for "$|" that would leap out at you (do I want my "pipes to be piping hot"?).
So this is a hard one... if the default were different, perl might have an undeserved reputation for slow output, but as it is there's a nasty little gotcha in there. I tend to turn $| on (i.e., shut buffering off) for all my command line scripts... though of course you probably *shouldn't* do that in something like a CGI script.
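The fix for that interleaving is one line, and it illustrates the trade-off: every print becomes its own write to the OS instead of being batched. A minimal sketch (the printed strings are made up):

```perl
use strict;
use warnings;

$| = 1;    # unbuffer the currently selected handle (STDOUT)

# Perl's prints now reach file descriptor 1 immediately, so they stay
# in order with the child process's output instead of sitting in
# perl's buffer while the child writes around them:
print "Readable column description\n";
system('echo', 'external command output');   # child writes straight to fd 1
print "Another annotation\n";
```

Without the `$| = 1`, the two `print` lines would typically appear together after the `echo` output whenever STDOUT is a pipe or file.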
I see a lot of programs where people disable buffering when they only print to standard output with newlines (or standard error) and never call external programs. (I've also seen a lot of programs disable buffering when they never printed to that filehandle!) Out of the last few hundred pieces of code I've seen, perhaps 5% needed to disable buffering.
I agree that people should not blindly turn off buffering, but to be honest it will lead to fewer problems for most people than blindly leaving it on. The main issue I have with what you say is that you need to know how the program will be used in the future before you can safely leave it on, and this is not easy to do. For example, buffering will keep a script from being used effectively in pipelines when the data arrives in a time-sensitive manner...
With buffering on, it may be a long time before I see a log message I want (Update: after my_script processes it in some way) even though it's at the end of the log file. If I kill the pipeline I may never see the output I want. Perhaps a lot of people don't do this kind of thing, but I find that I use scripts in pipelines where I wouldn't use them in the past. FWIW, I could turn your argument around and say that you should always turn buffering off unless you know you need performance (i.e. you profiled it), since it could be viewed as premature optimization at the expense of compatibility, but I think that argument is a little strained.
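A sketch of the pipeline case (the script and pattern are hypothetical): a filter that flushes per line, so the downstream stage sees each match as soon as it's written rather than when a block-sized buffer happens to fill:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use IO::Handle;

# When STDOUT is a pipe rather than a terminal, perl block-buffers
# it, so something like `tail -f log | my_script | less` can sit
# silent for a long time even though matches have been printed.
# Per-handle autoflush trades throughput for timeliness.
STDOUT->autoflush(1);    # equivalent to selecting STDOUT and setting $| = 1

while (my $line = <STDIN>) {
    next unless $line =~ /ERROR/;   # hypothetical filter condition
    print $line;    # reaches the next pipeline stage immediately
}
```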