It would be interesting to see some profiling results showing where the time is spent. For Perl you can use Devel::NYTProf; on the Python side, the standard library's cProfile module serves a similar role.
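As a minimal sketch of the Python side, cProfile can wrap a hot function and report where time goes; the function name here is hypothetical, standing in for whatever the benchmark actually exercises:

```python
import cProfile
import io
import pstats

def hot_loop(n):
    # placeholder workload standing in for the real benchmark body
    total = 0.0
    for i in range(n):
        total += i * 0.5
    return total

profiler = cProfile.Profile()
profiler.enable()
hot_loop(200_000)
profiler.disable()

# dump the top entries sorted by cumulative time
buf = io.StringIO()
stats = pstats.Stats(profiler, stream=buf).sort_stats("cumulative")
stats.print_stats(5)
print(buf.getvalue())
```

Devel::NYTProf produces an analogous per-line report for the Perl version, so the two breakdowns can be compared directly.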
I've profiled code with PDL in the past, and when many piddles are created, that allocation itself shows up as a hotspot. Your benchmark regenerates the piddle on every iteration.
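The effect is easy to reproduce in any language: allocating a fresh buffer inside the timed loop charges the allocation cost to every iteration, while a preallocated buffer is paid for once. A minimal sketch using plain Python lists (the buffer size and names are illustrative, not from the original benchmark):

```python
import timeit

N = 100_000

def with_allocation():
    # allocates a fresh buffer on every call, like regenerating
    # the piddle each time through the benchmark loop
    buf = [0.0] * N
    for i in range(0, N, 1000):
        buf[i] = i * 0.5
    return buf

_reused = [0.0] * N

def with_reuse():
    # reuses a buffer allocated once, outside the timed region
    for i in range(0, N, 1000):
        _reused[i] = i * 0.5
    return _reused

t_alloc = timeit.timeit(with_allocation, number=200)
t_reuse = timeit.timeit(with_reuse, number=200)
print(f"fresh buffer: {t_alloc:.4f}s  reused buffer: {t_reuse:.4f}s")
```

Hoisting the piddle (or numpy array) creation out of the loop isolates the computation you actually want to compare.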
This might also be relevant here: https://sourceforge.net/p/pdl/mailman/message/35067272.
Your code also includes interpreter startup time in the measurement, and PDL is a fairly heavy package that pulls in many dependencies; PDL::Lite is useful in such cases.
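One way to see how much startup contributes is to time a fresh interpreter with and without the import. A hedged sketch using a small stdlib module as a stand-in (for the real comparison you would time `import numpy` against `perl -MPDL -e 1` and `perl -MPDL::Lite -e 1`):

```python
import subprocess
import sys
import time

def startup_time(snippet):
    # time a fresh interpreter running one snippet: this captures
    # interpreter startup plus whatever the snippet imports
    start = time.perf_counter()
    subprocess.run([sys.executable, "-c", snippet], check=True)
    return time.perf_counter() - start

bare = startup_time("pass")
# "import json" is only a lightweight stand-in for a heavy dependency
with_import = startup_time("import json")
print(f"bare interpreter: {bare:.3f}s  with import: {with_import:.3f}s")
```

Subtracting the bare-interpreter figure from each benchmark gives a fairer view of the computation itself.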
That said, the cross-posted question on the pdl-general mailing list has updated numbers in the thread: https://sourceforge.net/p/pdl/mailman/message/37112311. numpy is faster than PDL there, but not by a factor of two. (Cross-posting is fine, but it helps to note it in each post.)
A final observation: you appear to be running your code in a virtual machine. Does that have any effect on the relative speeds?