|laziness, impatience, and hubris|
Wow. Those results were very hard to read and understand.
First, none of your cases seems to be recompiling a regex each time through the loop. It appears to me that the worst case you've included does a string compare each time through the loop to determine that the regex doesn't need to be recompiled. Clearly, CORE::regcomp() doesn't unconditionally recompile a regex: based on parsing your results, it checks some things to determine whether it even needs to do a string compare, then optionally does that compare, and only recompiles the regex if the compare finds a difference.
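To make that concrete, here is a minimal sketch (my own illustration, not from the benchmark) of the situation being measured: an interpolated pattern in a loop, where the pattern string only changes once, so at most one recompile should happen even without /o:

```perl
use strict;
use warnings;

# Illustrative sketch: without /o, Perl re-checks an interpolated pattern
# each time through the loop, but only actually recompiles it when the
# pattern *string* has changed (here, exactly once).
my $pat     = 'foo';
my $matches = 0;
for my $i ( 1 .. 10 ) {
    $pat = 'bar' if $i == 6;           # pattern text changes on iteration 6
    $matches++ if 'foobar' =~ /$pat/;  # string compare each time; recompile once
}
print "$matches\n";                    # 10: every iteration matched "foobar"
```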
Let's look at your results cleaned up so the interesting numbers are much easier to compare:
Second, let's take care of the least interesting bit:
We can see that the differences in the speed of the regex matching are "in the noise". Indeed, I can think of no reason why the speeds would differ in practice, and I suspect the reported differences really are just noise. You might want to shuffle the order of the cases and re-run to see whether the noise follows the order of execution or just moves randomly. One of those cases might show an insignificant difference that isn't noise, but I won't waste time chasing that until I see better evidence that it isn't.
Now for the more interesting part:
We see that the first case spends about 8x longer in regcomp() than most of the others. My theory is that, since magic is involved and each pass through the loop re-calls FETCH(), a fresh copy of the read-only value gets handed to regcomp(), which is therefore forced to do the string comparison. It looks to me like none of the other cases even needs to compare strings.
This means that the differences between most of the other cases are so very, very tiny as to be extremely unlikely to be noticed in any real-world situation. They are differences between relatively short paths through some C code. In a Perl script, such minuscule run-times are completely dwarfed by rather mundane stuff and won't add up to anything more than a tiny fraction of a real script's overall run time.
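To illustrate the "magic" part of my theory: a tied variable's FETCH() really is called on every read, so a pattern that interpolates a tied (or otherwise magical) variable hands a fresh value to regcomp() each time. A minimal sketch (my own example, using a hypothetical CountingTie class):

```perl
use strict;
use warnings;

# Hypothetical tie class that counts how many times FETCH is called.
package CountingTie;
sub TIESCALAR { my ( $class, $val ) = @_; return bless { val => $val, reads => 0 }, $class }
sub FETCH     { my ($self) = @_; $self->{reads}++; return $self->{val} }
sub STORE     { my ( $self, $val ) = @_; $self->{val} = $val }

package main;
tie my $pat, 'CountingTie', 'foo';
'food' =~ /$pat/ for 1 .. 5;           # interpolating $pat reads it each time
print tied($pat)->{reads}, "\n";       # FETCH ran once per match attempt
```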
The m/999986/ case is moderately interesting in that it demonstrates that the regex is actually compiled when the Perl code is compiled, so Perl can completely skip checking whether it needs to compile it again.
The other cases show only differences that are, again, "in the noise".
So there is no appreciable speed advantage to using /o. There are, however, significant disadvantages with regard to clarity of code and likelihood of introducing bugs.
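The "likelihood of introducing bugs" is the classic /o trap. A short sketch of my own (not from the benchmark): with /o, the pattern is compiled from the first value of the variable ever seen at that point in the code and is never recompiled, so later calls silently match against a stale pattern:

```perl
use strict;
use warnings;

# Illustrative sketch of the classic /o bug: the regex op compiles the
# pattern on its first execution and, with /o, never recompiles it.
sub match_o {
    my ( $str, $pat ) = @_;
    return $str =~ /$pat/o;
}

print match_o( 'apple',  'app' ) ? "match\n" : "no match\n";   # compiles /app/
print match_o( 'banana', 'ban' ) ? "match\n" : "no match\n";   # still /app/: wrong!
```

The second call prints "no match" even though "banana" plainly contains "ban", because the op is still using the /app/ compiled on the first call.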
It is unfortunate that you have shown that the use of qr// can approximately double the time spent in regcomp(). Of course, that time still amounts to a very tiny total that is very unlikely to be noticed in a real-world situation.
Let's look at the source code (p5git://pp_ctl.c) to see why. Search for the pp_regcomp function. There we see the extra work required in the qr// case, including a link explaining why this extra work is unfortunate and will likely go away at some point in the future: http://www.nntp.perl.org/group/perl.perl5.porters/2007/03/msg122415.html.
But, again, the slight speed penalty is very unlikely to be noticed outside of a benchmark and the benefit to code clarity and maintainability (of using qr//) makes this a very easy call for me to make for myself. I use qr//. I never use /o.
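For completeness, here is the style I'm advocating, as a small sketch of my own: compile the pattern once with qr//, then reuse the compiled regex object. The intent is explicit, and there is no stale-pattern trap as with /o:

```perl
use strict;
use warnings;

# Compile once with qr//, reuse everywhere the pattern is needed.
my $re   = qr/an/;
my @hits = grep { /$re/ } qw( banana grape mango );
print "@hits\n";   # banana mango
```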
(Updated first two sentences of 2nd paragraph to not make my theory sound like something I have verified completely.)