|There's more than one way to do things|
OK, I haven't looked at the internals extremely closely, but I can give you the following broad rundown. Some guru will no doubt come along and paint a better picture later.
The hash implementation uses chained hashing with a floating number of buckets that is always a power of 2. A hash value is calculated for each key, and then the appropriate low bits are masked off, depending on the number of buckets, to pick a bucket. When the ratio of items stored to buckets in use exceeds some limit (it's more complex than just buckets used, or number of items in a bucket), the bucket array is doubled in size on the next store and all the keys are remasked and reassigned. So the number of buckets can often be dramatically larger than the number of keys stored. (You can see this by stringifying a hash.)

OTOH, when you use many hashes with identical long keys, perl actually saves memory by storing each key only once. All hashes used in perl share their keys, which is one of the reasons hash keys aren't actually SVs.
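To illustrate the "mask off the appropriate bits" step, here is a toy sketch: the hash function below is a one-at-a-time style mixer I wrote for demonstration, not perl's actual (seeded, version-dependent) hash function, but the bucket selection at the end is the same idea.

```perl
use strict;
use warnings;

# Toy hash function for illustration only -- NOT perl's real one,
# which is randomly seeded and has changed across versions.
sub toy_hash {
    my ($key) = @_;
    my $h = 0;
    for my $c (unpack 'C*', $key) {    # walk the key byte by byte
        $h = ($h + $c) & 0xFFFFFFFF;
        $h = ($h + (($h << 10) & 0xFFFFFFFF)) & 0xFFFFFFFF;
        $h ^= $h >> 6;
    }
    $h = ($h + (($h << 3) & 0xFFFFFFFF)) & 0xFFFFFFFF;
    $h ^= $h >> 11;
    return ($h + (($h << 15) & 0xFFFFFFFF)) & 0xFFFFFFFF;
}

my $buckets = 16;    # always a power of 2
for my $key (qw(foo bar baz)) {
    # Because $buckets is a power of 2, masking with $buckets - 1
    # is equivalent to (but cheaper than) taking the hash mod $buckets.
    my $idx = toy_hash($key) & ($buckets - 1);
    print "$key -> bucket $idx of $buckets\n";
}
```

When the bucket array doubles, the mask simply grows by one bit, which is why a split only has to look at one extra bit of each stored hash value.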
Watching a growing hash that way shows that the bucket array starts with 16 elements and then doubles.
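Here is a sketch of that experiment. On older perls, `scalar %h` itself returned the "used/total" bucket string; on 5.26 and later it returns the key count instead, so this version uses `Hash::Util`'s `num_buckets` and `bucket_ratio` (which need perl 5.26+):

```perl
use strict;
use warnings;
use Hash::Util qw(num_buckets bucket_ratio);

my %h;
my $last = 0;
for my $i (1 .. 33) {
    $h{"key$i"} = 1;
    my $n = num_buckets(\%h);
    # Only report when the bucket array has actually grown.
    printf "%2d keys -> %3d buckets (%s)\n", $i, $n, bucket_ratio(\%h)
        if $n != $last;
    $last = $n;
}
```

The exact starting size and split points may differ between perl versions, but the total bucket count should always be a power of 2 and should double each time it grows.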
I think the question of memory optimisation versus speed optimisation versus speed-of-development optimisation is pretty hard to call given what you've said. If you're doing a proof of concept for an algorithm, then it shouldn't matter how efficient the underlying tools are, I would have thought.
Regarding your bonus question: almost certainly yes, they are different from most other languages. Hashes form a central part of perl-think, and a big part of how perl itself works is implemented via hashes. Most associative arrays I've seen have been tree or list structures. I don't think many other languages implement associative arrays internally as hash tables, but I'm talking out of my ass when I say that. :-)
First they ignore you, then they laugh at you, then they fight you, then you win.