more useful options
Since what you say, what I remembered, and what many of my Google searches say conflict, I ran an experiment. I created this package in the debugger:
and then did this:
*main::BLODGETT does not exist, and *bar::BLODGETT does exist.
This confirms that, despite many webpages to the contrary, bareword filehandles in different packages do not conflict, and I am therefore pleased to have fixed a bug in the Perl documentation inside my head, which was wrong.
However, it's worth noting that the scenario at the CERT secure coding site depends on bareword filehandles: specifically, if someone defines a sub with the same name as your bareword filehandle, the sub takes precedence, which can lead to the problems noted there.
A programmer probably wouldn't do that to herself, but the potential exists for someone else to do it; lexical filehandles prevent this vulnerability from working at all.
As far as the "did I reuse this filehandle before I ought to have?" test goes, I definitely agree: that *is* hard to write. You could prevent the problem by keeping a table of filehandles and currently-open-for-output files and throwing an error on any attempt to re-open one that is still open, or by doing simulated runtime analysis that verifies there is no path that could lead to the file being reopened before it is closed.
But now we're outside the scope of the question I asked, which was not about whether I could formulate this particular check correctly, but about what "dynamic analysis" meant to BrowserUK. I'm guessing you're both talking about "simulated execution", which puts us into a much more sophisticated kind of analysis. Not impossible to do, but well beyond what Perl::Critic attempts. If that is the case, I'll agree: Perl::Critic can't do that.
It can do some things that are appropriate in large development projects, like verifying that you remembered use strict. You can certainly check that yourself if you want to; I prefer to have mechanical checks handled in a mechanical way, just as I prefer automated tests, even though Test::More can't test every possible situation. We just need to remember, and emphasize to more naive programmers, that tests don't guarantee the code is right, only that what we've tested passes the tests as they were defined, and is therefore probably not wrong.
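For the use strict check specifically, Perl::Critic ships a policy for exactly this, TestingAndDebugging::RequireUseStrict. A `.perlcriticrc` fragment like the following makes it part of a mechanical run (the policy name is Perl::Critic's own; the severity value is only an example setting):

```ini
# .perlcriticrc fragment: flag any file that does not use strict.
# severity here is just an example choice, not a required value.
[TestingAndDebugging::RequireUseStrict]
severity = 5
```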
I think that a tool that warns me about things that may mess me up later is useful. Others don't. We'll have to agree to disagree, as I have the feeling this isn't really about the tool, but about approaches to development, and I already know BrowserUK and I have wildly different preferences there! Since those are matters of taste rather than fact, arguing about them isn't productive, so I shall stop.