The OP's misunderstanding is the assumption that perl, the interpreter, compiles Perl (the language) to machine code. It's not nearly that simple. Perl's compilation phase allows for the execution of Perl code: virtually every part of the language that is available at runtime is available at compiletime as well. Furthermore, Perl's runtime can invoke the compilation phase, so the full power of the Perl interpreter is available to the runtime. This blurred distinction makes C++'s most vexing parse look like a fly on a camel's back. And because string eval is given full access to the symbol table, as well as to the current lexical scopes, even perl, the interpreter, doesn't always know what's going to happen until it happens.
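To make the blurred phases concrete, here's a minimal sketch (variable names are made up for illustration): a BEGIN block runs arbitrary Perl while the file is still being compiled, and a string eval invokes the compiler at runtime with full access to the enclosing lexical scope.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Full Perl executes during the compile phase:
BEGIN {
    print "This runs while the rest of the file is still compiling.\n";
}

# ...and the runtime can invoke the compiler. A string eval compiles
# new code on the fly, with access to the caller's lexical scope:
my $x = 40;
my $result = eval '$x + 2';   # compiled at runtime, sees lexical $x
print "$result\n";            # prints 42
```

No static analysis of the file can know what `eval` will compile here until it actually runs, which is the heart of the problem.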
To pre-compile Perl, one either needs to solve the Halting Problem, or needs to be satisfied with compiling only those parts of Perl that don't rise to that level of difficulty. In other words, you need to create a compiler that handles the subset of Perl that can be compiled. That's not quite Perl, though. It's along the lines of what Will_the_Chill has been working on: defining a subset of Perl that can be compiled, and energizing the community to work on a compiler that does only that, while integrating as seamlessly as possible with code that cannot be pre-compiled.
This MJD talk doesn't directly address the issue of a Perl compiler, but discusses why a Perl-to-C translator would essentially amount to reimplementing Perl, which brings you back to square one.
I'm not saying that such projects are pointless. I'm saying that until the person defining the scope of the project understands the limitations, a lot of time will be wasted chasing a spec that is impossible to reach. That causes people to lose respect for, or interest in, a project before it gets very far, which is a shame. It's best to know what the limitations are, create a compiler that does everything possible within the confines of those limitations, and then start innovating to push the envelope outward. You never will reach 100% of Perl, but you might reach a very useful majority. Unfortunately, a useful majority is useless in practical terms if it can't deal with the modules your code requires to run, so you have to be very clear in documenting what cannot be done, and to get anyone to use it, you'll have to be proactive in also identifying which common modules can be used.
C++ actually sort of did the first part of that. There was old pre-standard C++ (C++ '89, or whatever it was) that didn't have templates, and consequently lacked the capability of implementing most of what people refer to as the STL. Then someone figured out how to push the envelope a little by adding templates. The 1998 standard added features that facilitated template metaprogramming, and language features that had seemed impossible a few years earlier started to become possible (such as the STL, among other things). Over the years templates have become even more powerful, and C++11 found additional areas where the envelope could be pushed outward by improving the compiler. But none of these advances attempted to solve the halting problem; they just found previously unconsidered ways to do things that are extremely useful, while still falling short of that level of difficulty. And look how long it's taken to get there... how many man-hours, etc. Yet C++ still isn't able to give full language semantics to the compiletime phase, nor full compiletime semantics to the runtime.
Parsing and compiling 100% of Perl rises to that level of difficulty. Parsing and compiling a well-chosen subset doesn't, but it isn't as useful, and it isn't full Perl. Finding ways to get closer is a big job, and with enough work progress can be made. But you're not going to parse and compile 100% of Perl.
The OP's question earlier was "what's the big deal?" My post was simply to explain that it sort of is a big deal. We would all love for someone in some corner of the Computer Science world to solve the halting problem, because that solution would generalize to solving all other equivalent problems (including, we suppose, parsing and compiling Perl). But Turing's proof has stood up for nearly eighty years.
Update: Some additional interesting reading I found, which is loosely related: Perl contains the Lambda Calculus (MJD), and Wikipedia on Lambda Calculus # Anonymous Functions: "Compiled languages, such as C++, permit run-time selection (i.e., binding) of a variety of already-compiled functions, which is utilized in lieu of compiling additional machine-code instructions, which is what would be needed by compiled languages to achieve full-fledged run-time creation of new functions." Perl supports full run-time creation of new functions. Even C++11's "lambda functions" are, behind the scenes, classes with an overloaded () operator; there's no provision in that language for runtime creation and compilation of code that can be executed natively within the same runtime environment as its creator.

I would love to have more than my limited understanding of the Lambda Calculus; if I did, I could probably draw parallels between the undecidability of the equivalence problem and some of how Perl works. And I might be able to back up the assertion that, due to Perl's sloppy (but useful) distinction between what constitutes runtime and compiletime, the reducibility of Perl code falls into the "all bets are off" category. ;)
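For contrast with the C++ situation above, here's a small sketch of what "full run-time creation of new functions" means in Perl (the sub name `triple` and the variables are invented for this example): a closure captures a runtime value, and a string eval compiles brand-new source after the main program has finished compiling, installing it into the symbol table.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# 1. An anonymous sub closing over a value that only exists at runtime:
my $n     = 3;
my $add_n = sub { return $_[0] + $n };
print $add_n->(4), "\n";   # prints 7

# 2. Compiling entirely new source at runtime and installing it into
#    the symbol table as a named sub:
my $src = 'sub triple { return 3 * $_[0] }';
eval "$src; 1" or die $@;
print triple(5), "\n";     # prints 15
```

A C++ lambda selects among machinery that was compiled ahead of time; the second half of this sketch genuinely runs the compiler again, inside the running program.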