There appears to be something going on here that is not obvious from the code snippet you posted. A 43-million-element array of 20-ish-character strings should come in well below 3GB, and your Fisher-Yates implementation is in-place, so it should cause no memory growth. I.e., you should be well within your machine's capacity without swapping, and your shuffle should complete within seconds.

"Is there a better way to do this, other than the iterative slicing he's doing?"

There are a couple of small improvements you could make to your shuffle, as sketched below:
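(The specific list of improvements didn't survive here; the following is a minimal illustrative sketch of a typical in-place Fisher-Yates in Perl, not the poster's original code. It assumes the data is in @array and shows two common tweaks: skipping the self-swap, and passing a reference so the 43M elements are never copied.)

    # In-place Fisher-Yates shuffle (illustrative sketch).
    sub fisher_yates_shuffle {
        my $aref = shift;               # shuffle in place via a reference
        for ( my $i = $#{ $aref }; $i > 0; --$i ) {
            my $j = int rand( $i + 1 ); # pick an index from 0 .. $i
            next if $i == $j;           # skip the pointless self-swap
            @{ $aref }[ $i, $j ] = @{ $aref }[ $j, $i ];  # slice swap
        }
    }

    fisher_yates_shuffle( \@array );    # pass a ref: no copy of 43M elements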
That said, the savings from the above will make little dent in the current processing time, and they are completely negated by your tracing code. You need to look beyond the Fisher-Yates function for the cause of its slowness. E.g., are you consuming large amounts of data outside of that subroutine that could be pushing you into swapping? One way to check is sketched below.
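(A sketch, assuming the data lives in @array: the CPAN module Devel::Size can report how much memory a structure actually occupies. Note that total_size() walks the whole structure, so it is itself slow on 43M elements.)

    use Devel::Size qw( total_size );

    # Measure the real memory footprint of the array, including
    # per-element overhead, to see whether it could be forcing swapping.
    my $bytes = total_size( \@array );
    printf "Array occupies %.2f GB\n", $bytes / 2**30;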
With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.
In reply to Re: Very Large Arrays by BrowserUk