In some fairly crude testing on a test file of 500,000 randomly generated lines (18MB), generated to fit the pattern of data you described, this seems to run about four times as fast: 40 seconds rather than 160.
You may need to adjust the regex, and you might squeeze some more out by playing with the read size and/or the hash preallocation. In my tests I had variable results from both, but the differences were within the bounds of run-to-run error; especially for the latter, as I am not really sure how the number of buckets relates to the number of keys.
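For anyone wanting to see the shape of the approach, here is a minimal sketch of a block-read-and-count loop in the spirit of the above. It is not the code from this node: the 4MB read size, the 500,000-bucket preallocation, and the whole-line regex are all assumptions you would tune to your own data.

```perl
#!/usr/bin/perl
# Minimal sketch of the block-read-and-count approach discussed above.
# The read size, the bucket preallocation figure and the whole-line regex
# are placeholders to experiment with, not the exact code from this node.
use strict;
use warnings;

my $READ_SIZE = 4 * 1024 * 1024;    # bytes per read(); tune to taste
my %count;
keys %count = 500_000;              # preallocate hash buckets (optional)

open my $fh, '<', $ARGV[0] or die "open '$ARGV[0]': $!";

my ( $buf, $tail ) = ( '', '' );
while ( read( $fh, $buf, $READ_SIZE ) ) {
    $buf = $tail . $buf;
    $buf =~ s/([^\n]*)\z//;         # hold back any incomplete final line
    $tail = $1;
    # count whole lines; swap the capture for whatever extracts your key field
    ++$count{$1} while $buf =~ m/^(.+)$/mg;
}
++$count{$tail} if length $tail;    # last line may lack a trailing newline

close $fh;

# emulate sort | uniq -c: count followed by the (sorted) line
printf "%7d %s\n", $count{$_}, $_ for sort keys %count;
```

The only fiddly part is carrying the partial line at the end of each block over into the next read, so that no record is ever split across two buffers.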
Examine what is said, not who speaks.
1) When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong. 2) The only way of discovering the limits of the possible is to venture a little way past them into the impossible. 3) Any sufficiently advanced technology is indistinguishable from magic. Arthur C. Clarke.

In reply to Re: speed up one-line "sort|uniq -c" perl code by BrowserUk