You have to distinguish between "free memory" that is available to the Perl process and "free memory" that is available to the operating system. A simple undef will not free the memory back to the OS; it only frees it for reuse within perl.
Let's say your box has 128MB of RAM. You start some perl script. The perl process takes somewhere between 5 and 6MB of RAM. Now you create a hash of about 10MB in size. Your process now takes 16MB of RAM. Now you undef the hash again. Your process still takes 16MB of RAM.
Now you - in the same process - create some 5MB list/array/whatever. Your process still takes 16MB of RAM. So perl always keeps the maximum amount of memory it has ever allocated.
Subsequent allocations within this memory are handled by perl itself, not by the OS, so the process doesn't grow as long as you never allocate more than the current perl process's previous peak.
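You can watch this happen yourself. Here is a minimal sketch (assuming a Linux box with /proc; the rss() helper below is made up for this demo) that prints the resident set size at each stage:

    use strict;
    use warnings;

    # Read this process's resident set size from /proc (Linux only);
    # rss() is a helper invented for this demo, not a standard function.
    sub rss {
        open my $fh, '<', '/proc/self/status' or die "cannot read /proc: $!";
        while (my $line = <$fh>) {
            return $1 if $line =~ /^VmRSS:\s+(\d+)\s+kB/;
        }
        return 0;
    }

    print 'at start:        ', rss(), " kB\n";

    my %hash = map { $_ => 'x' x 100 } 1 .. 100_000;  # roughly 10MB of data
    print 'after big hash:  ', rss(), " kB\n";

    undef %hash;    # freed for perl's reuse, not returned to the OS
    print 'after undef:     ', rss(), " kB\n";

    my @list = ('y' x 100) x 50_000;  # ~5MB, fits in the already-held memory
    print 'after new list:  ', rss(), " kB\n";  # stays around the same size

The third and fourth numbers come out nearly identical: the 5MB list is carved out of the memory the undef'd hash left behind, so the process never has to ask the OS for more.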
So what can one do about that? Under Linux or some other superior OS that has fork - yes, you got it: deallocate the big array and just fork. More precisely, spawn a child and let the parent process die.
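A rough sketch of that pattern (hypothetical code, not from the original post; how much memory the OS actually gets back depends on the kernel's copy-on-write behaviour):

    use strict;
    use warnings;

    my %big = map { $_ => 'x' x 100 } 1 .. 100_000;   # the memory-hungry part
    # ... do whatever work needed the big hash ...
    undef %big;                                       # free it for perl first

    defined(my $pid = fork()) or die "fork failed: $!";

    if ($pid) {
        # Parent: its job is done; let the bloated process die so its
        # memory goes back to the OS.
        exit 0;
    }

    # Child: carries on with the remaining, smaller workload.
    # ... long-running follow-up work goes here ...

The child is reparented to init once the parent exits, so this is most natural for daemon-like scripts where the remaining work runs for a long time with a modest footprint.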