I don't think they are necessarily hard to read at all.

    foreach my $k ( sort { $self->{state}{inputs}{expected}->{$a}
                           <=>
                           $self->{state}{inputs}{expected}->{$b} }
                    keys %{ $self->{state}{inputs}{expected} } ) {
        # do stuff
    }
can be a little hard on the eyes. I'm certainly not advocating that everyone use refs when working with HoHs. The main point of the post was that going deep gets slow quickly, and that in certain cases I'm surprised it isn't optimized away; Perl seems to do everything else for me.
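For contrast, here's a minimal sketch of the ref-caching version of the loop above. The data is made up for illustration; the point is that the deep path is walked once, not on every comparison.

```perl
use strict;
use warnings;

# Assumed data shape, mimicking $self->{state}{inputs}{expected}
my $self = {
    state => { inputs => { expected => { a => 3, b => 1, c => 2 } } },
};

# Cache a reference to the inner hash with one deep lookup,
# then sort against the cached ref.
my $expected = $self->{state}{inputs}{expected};
my @ordered  = sort { $expected->{$a} <=> $expected->{$b} } keys %$expected;

# @ordered is now ('b', 'c', 'a') -- ascending by value
```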

On occasion, I've used hashes simply as a way to group variables.
%hash = ( headers => {data}, data => { "huge hash" } );
One such instance was a quick-and-dirty search tool: a HoH with a huge number of entries. If a simple assignment gets me a 15% speed-up when iterating through large data sets, I think it's worth doing.
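A rough sketch of how that kind of claim can be measured with the core Benchmark module, comparing the full nested lookup against a single up-front assignment. The data shape and sizes here are invented, so the exact numbers will vary by machine and Perl version.

```perl
use strict;
use warnings;
use Benchmark qw(timethese);

# Invented data: one inner hash with 1000 entries
my %hash = ( data => { map { $_ => $_ } 1 .. 1000 } );

timethese( 1_000, {
    nested => sub {
        my $sum = 0;
        $sum += $hash{data}{$_} for keys %{ $hash{data} };
        return $sum;
    },
    cached => sub {
        my $data = $hash{data};    # one assignment up front
        my $sum  = 0;
        $sum += $data->{$_} for keys %$data;
        return $sum;
    },
} );
```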

"That using a reference to an inner structure is a win in your benchmark is clear, as you don't have to redo some calculations. But you cannot do that always - you can only do that if you access the same keys repeatedly. Often, the keys used are variable, and will differ from iteration to iteration."

I'm not sure what you mean by this point. Care to clarify?


"To be civilized is to deny one's nature."
Do you mean $h{$thiskey}{$thatkey}? Even then, orderly traversals are fairly common, so in cases where time is an issue I still think it's a good thing to know.
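Even with variable keys, the reference can still be cached per outer key. A small sketch (%h and the key names are illustrative):

```perl
use strict;
use warnings;

# Illustrative two-level hash
my %h = (
    x => { p => 1, q => 2 },
    y => { p => 3, q => 4 },
);

my $total = 0;
for my $thiskey ( keys %h ) {
    my $inner = $h{$thiskey};          # cache once per outer key
    for my $thatkey ( keys %$inner ) {
        $total += $inner->{$thatkey};  # no repeated $h{$thiskey} lookup
    }
}
# $total is now 10
```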

In reply to Re: Re: The Cost of Nested Referencing by shotgunefx
in thread The Cost of Nested Referencing by shotgunefx
