In theory, a good optimizing compiler can avoid some of the stack manipulation necessary for naïve C function calls. (Obviously for those externally visible symbols where you must adhere to a platform ABI, you can't get away with this.) Even better, because the C stack isn't heap allocated, you can avoid malloc/free pairs for stack allocated values.
This has at least two drawbacks. First, your calling conventions end up looking a lot like C's, so you have to go through wild contortions for things like exception handling, tail call optimization, green threads, and continuations. Second, you inherit the limitations of the C stack: maximum stack depth (see recursion and corecursion), thread stack limits, and the safety of extension code written with access to the C stack all become concerns.
You have to be a lot more clever to write optimizable code using the heap, but you get a lot of features from avoiding the C stack. To go really fast, you have to avoid copying a lot of memory around anyway, so my preference is to figure out how to optimize heap calls: avoid allocating and freeing unnecessary memory, put everything into processor registers, and keep it there.
What stops perl from using alloca, where possible, today?
Some combination of the need to box everything into SVs, non-linear control flow, and lack of escape analysis.
you want C stack allocated SVs?
It'd be nice to have something slimmer and simpler than an SV.
are you referring to RISC cpus with dozens of registers and keeping entire structs split across registers?
The AMD64 architecture (or whatever it's called) has plenty more registers than the preceding 32-bit x86 architecture, but even keeping a couple of commonly used parameters in registers throughout a function is going to be cheaper than copying them in and out of memory.
I believe that rewriting Parrot's op dispatcher in assembly to keep the program counter and interpreter in registers would offer a sizable improvement in op dispatch (not that that's anything close to Parrot's bottleneck).
Because 1. a stack access is roughly 70 times faster than a heap access, 2. allocation is free, 3. you do not need to clean up the stack, and 4. stack pointers are thread safe (each thread has its own stack).
That's why most programming languages use an ABI that puts parameters onto the stack, and preferably keeps it properly aligned so instructions like MMX can be used.
It's not only for recursion.
Compare reading or writing at %ebp+8 (a frame-relative address) against following an arbitrary absolute pointer: it's on the order of 5 versus 150 micro-instructions.
Stack accesses are also relative, hot, and local; heap accesses usually aren't. Some heap pointers are hot and cached, but you still pay the cache overhead.