in reply to threads::shared seems to kill performance
Yes, shared aggregates are considerably slower than non-shared.
But try it this way and it will be about two-thirds less slow:
use threads;
use threads::shared;

# Build the 1000 empty shared inner hashes once, keyed 1 .. 1000.
my %hashOf1000SharedHashes = map { $_ => &share( {} ) } 1 .. 1000;

# Populate each of the 5000 top-level entries from that template.
my %data :shared;
foreach my $x ( 1 .. 5000 ) {
    $data{ $x } = shared_clone( \%hashOf1000SharedHashes );
}

# The template is no longer needed.
undef %hashOf1000SharedHashes;
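If you want to quantify the raw sharing overhead on your own build, the core Benchmark module will do it. This is only a rough sketch (the hash size of 1000 is arbitrary), separate from the solution above:

use strict;
use warnings;
use threads;
use threads::shared;
use Benchmark qw( cmpthese );

# Compare element stores into a plain hash vs a shared hash.
cmpthese( -3, {
    plain  => sub { my %h;         $h{ $_ } = $_ for 1 .. 1000 },
    shared => sub { my %h :shared; $h{ $_ } = $_ for 1 .. 1000 },
} );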
That said, building a 2D HoH of empty hashes (with consecutive numerical indices?) doesn't seem very useful.
Presumably that structure will need to be populated at some point -- and with that amount of data it must be coming in from outside the program -- and once you add the IO to fetch the data into the mix, the cost of making the data shared will pale into insignificance.
If, instead of building a huge, empty shared data structure and then populating it (which will take considerable further time), you shared and populated it in one pass, you'd save considerable time, and the sharing costs would almost disappear amongst the IO costs.
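For illustration only, here is a minimal sketch of that one-pass idea, assuming (purely as a guess, since you haven't said) that the data arrives as one whitespace-separated "outer-key inner-key value" record per line of a file:

use strict;
use warnings;
use threads;
use threads::shared;

my %data :shared;

# 'records.txt' and the record layout are hypothetical stand-ins for your real input.
open my $fh, '<', 'records.txt' or die "records.txt: $!";
while ( my $line = <$fh> ) {
    chomp $line;
    my ( $outer, $inner, $value ) = split ' ', $line;

    # Create each inner shared hash only when its outer key first appears,
    # and populate it immediately -- no separate build-empty-then-fill pass.
    $data{ $outer } //= &share( {} );
    $data{ $outer }{ $inner } = $value;
}
close $fh;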
Tell us more about what goes in this monster, where it comes from, and how it is used, and we'll probably be able to help you save a lot of time.