|The stupid question is the question not asked|
May I assume that computer science is not your primary background as well?
Your instructions on narrowing down the source of a problem hold for simple programs in simple environments, but those are not the environments of today's computer systems. An update -- introduced either because I installed an update for an unrelated product like Photoshop, or because of some permitted OS update (manual or automatic) -- can easily change the behavior of installed programs.
In Windows 7, programs are sensitive to what order they are loaded into memory and what else is loaded at the same time. It's very rare that they interact -- as we both 'know', they are not "supposed" to interact. But I have seen it happen due to random memory allocation by Windows. Such things are rarely repeatable once the offending programs are terminated, or even after the machine is rebooted and the same programs loaded: something left over in memory caused a different behavior.
In a perl program, the larger the program, the more likely it is that any memory corruption or leak in perl will later affect something else at random. Even the size of the file read in could affect how it interprets things early on. There are tons of subtle bugs that might only show up intermittently. In real-world testing, you don't just run a test once; you run it many times. It may be that the system only crashes after the 100th, or 1,000th, or 10,000,000th execution.
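That repeated-run style of testing can be sketched in a few lines (a hypothetical illustration only: `make_flaky_test` is a stand-in that fails on one specific iteration, not a real test harness):

```python
# Stress-run a test many times: an intermittent bug may only surface
# on the Nth execution, so a single passing run proves very little.

def make_flaky_test(fail_on=1234):
    """Stand-in for a real test: fails only on its `fail_on`-th run."""
    state = {"runs": 0}
    def test():
        state["runs"] += 1
        return state["runs"] != fail_on   # simulated intermittent failure
    return test

def stress(test, runs=10_000):
    """Run `test` repeatedly; return the first failing iteration, or None."""
    for i in range(1, runs + 1):
        if not test():
            return i      # first iteration that exposed the bug
    return None           # survived every run -- still not proof of absence

print(stress(make_flaky_test()))  # -> 1234
```

Note that a `None` result after 10,000 clean runs still tells you nothing definitive -- it may just mean the bug needs run 10,000,001.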
Later in my career, when my time was spent writing linux kernel security functions, crashes would occur after some random amount of execution time -- a timing bug that only happened ever so rarely. How do you debug it? By adding debug lines to the code, you are likely to make it go away. By removing functionality from the code, you may eliminate evidence of the problem, but not the cause. You may have just made it 10,000 times less likely to occur -- which for an OS that is meant to run 24/7, is still unacceptable. The way I attacked one such problem was by making the code less readable -- by optimizing the hell out of it -- because initially that forced the crash to occur more often, until I found what was likely the cause. But no one was SURE it was the cause, because you couldn't remove it and still have the program work, and changing it might simply hide it more thoroughly. Eventually I felt good about the code executing 20x faster, near the physical limits of the machine, for ... well, I wanted 6 days, but my boss wasn't willing to wait for more than 3. He still felt I was needlessly optimizing the code -- and resented my taking almost an extra 2 weeks to find a bug that a more senior engineer had given up on, a bug that still took hours to a day of runtime to reproduce when he gave up.
Perl is nowhere near the complexity of a kernel, but it is easily approaching the level of a compiler, where "at a distance" type bugs start being noticeable.
Coverity released a study on open source code quality and compared it to proprietary source code products. The defect density (defects per line of code) was about the same between projects of the same overall size, and defects per line grew in proportion to the size of the project. They did mention that open source projects tended to be smaller than their commercial counterparts for the same type of functionality, and thus, overall, had fewer bugs for products of the same type.
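To make that scaling concrete (the numbers below are purely illustrative -- they are not Coverity's figures): if defect density grows in proportion to project size, total defects grow with the square of the size, which is why the smaller codebases come out ahead overall.

```python
# Illustrative only: if density (defects per KLOC) scales linearly with
# project size, total defects scale quadratically. Constants are made up.

def total_defects(kloc, baseline_density=0.5):
    """Total defects assuming density proportional to size,
    normalized so a 100 KLOC project has the baseline density."""
    density = baseline_density * (kloc / 100)
    return density * kloc

print(total_defects(100))   # -> 50.0    (baseline project)
print(total_defects(1000))  # -> 5000.0  (10x the code, 100x the defects)
```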
That means perl is not immune to the same defects-per-line tendencies as any other project, and that as it grows, it will develop bugs similar to other projects of its size. Programs do develop bugs where cause and effect don't happen together, triggered not by just a few factors but sometimes by several, with some random 'salt' thrown in from the random number generator for good measure.
That's my recent, largest code experience -- where simplifying something down to just headers would never display the bugs.
However, you are more likely to end up with some seemingly innocuous change making huge, irrational changes in perl's error behavior (like in my response ^4, above), indicating the problem is not one of 'simplicity', but one of complexity causing perl to misbehave. That doesn't mean complexity itself is the problem: if it was valid code, the parser should have handled it, and if not, it should have given a more consistent and localized error message -- in both cases, clearly, it did not.
That is indirect evidence of a bug in the program handling the parsing and producing the output.
If you throw random garbage at the compiler, it doesn't matter that the program was wrong. The compiler shouldn't crash; it should determine the point at which the input became unacceptable, and if it couldn't make sense of it, then gracefully retire -- but at least point to the last place things made sense, and where things went wrong.
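What graceful failure looks like can be illustrated with Python's parser standing in for perl's (the malformed source string is an arbitrary example): a well-behaved parser doesn't crash on garbage, it raises a structured error pointing at the line and column where things stopped making sense.

```python
import ast

source = "x = 1\ny = (2 +\nz ] 3\n"   # deliberately malformed input

try:
    ast.parse(source)
except SyntaxError as err:
    # A well-behaved parser reports WHERE it lost track, not just THAT it did.
    print("parse failed at line", err.lineno, "col", err.offset)
except Exception as err:
    # Anything other than a clean syntax error would itself be a parser bug.
    print("parser misbehaved:", type(err).__name__)
```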
perl 5.16 doesn't do that; if anything, it reliably does not do that. That is bad design -- to the point that most wouldn't consider it to be a design flaw or bug without more evidence than what we've come up with in this thread. To the point: my claims of finding bugs in the compiler (supported by many bug reports in past versions, some of whose causes were found, and some not) would show that someone who strongly claims that my finding a bug is 'extraordinary', requiring extraordinary proof, is rather naive. I've had bugs that would reliably reduce the perl compiler to a core dump after a random amount of runtime. I've had others that only triggered when reading in a very specific way through 4TB data files. Not something that is the subject of your everyday testing.
If I say I have multiple programs that have broken between versions, it's likely true, and likely not entirely my fault, if at all -- or so history has shown.