
Re: Optimizing perl code performance

by Anonymous Monk
on Aug 14, 2015 at 18:03 UTC ( #1138621 )

in reply to Optimizing perl code performance

Your sample data (provided elsewhere in this thread) shows lines of approximately 390 bytes each. At a filesize of 500MB, that is over 1.3 million lines to process. On my system, calling strftime "%M,%Y,%m,%d,%H,%j,%W,%u,%A", gmtime $Y 1.3 million times takes over 30 seconds.
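You can reproduce that timing with the core Benchmark module; a minimal sketch, where the epoch value 1439575380 is just an arbitrary example:

```perl
use Benchmark qw(timethis);
use POSIX qw(strftime);

my $Y = 1439575380;    # hypothetical epoch-seconds value from a log line

# Run the strftime/gmtime pair 1.3 million times and report wall/CPU time.
timethis 1_300_000, sub {
    my $stamp = strftime "%M,%Y,%m,%d,%H,%j,%W,%u,%A", gmtime $Y;
};
```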

If the first ten digits that comprise "$Y" (the epoch seconds) repeat across many lines, you could cache on that value and avoid most of the 1.3 million calls to gmtime and strftime. But you're still calling split 1.3 million times, substr 1.3 million times, and so on.
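The caching idea can be as simple as a hash keyed on the epoch value; a minimal sketch (the sub name is made up for illustration):

```perl
use POSIX qw(strftime);

my %cache;    # epoch-seconds => formatted timestamp string

# Only the first occurrence of a given epoch value pays for
# gmtime + strftime; repeats are a single hash lookup.
sub cached_stamp {
    my ($epoch) = @_;
    return $cache{$epoch} //= strftime "%M,%Y,%m,%d,%H,%j,%W,%u,%A",
                                       gmtime $epoch;
}
```

If timestamps in the file are mostly sequential, the hit rate should be high and the hash stays small.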

You may find it better to chunk the input file and process it with four workers, each writing its own output file, then cat the output files together. It's possible (though not certain) that a sane number of workers, each doing its share of the work, would finish faster.
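One way to chunk by byte offset without pre-splitting the file is to have each forked worker seek to its region and snap to line boundaries; a rough sketch, assuming an input file named input.log and that processing one line is independent of the others:

```perl
use POSIX qw(ceil);

my $file    = 'input.log';    # hypothetical input file
my $workers = 4;
my $size    = -s $file or die "empty or missing $file";
my $chunk   = ceil($size / $workers);

for my $i (0 .. $workers - 1) {
    defined(my $pid = fork) or die "fork failed: $!";
    next if $pid;             # parent: keep spawning workers

    open my $in,  '<', $file         or die "open $file: $!";
    open my $out, '>', "part-$i.out" or die "open part-$i.out: $!";
    seek $in, $i * $chunk, 0;
    <$in> if $i;              # discard the partial line at the seek point

    while (my $line = <$in>) {
        # ... split/substr/format $line here, print result to $out ...
        last if tell($in) > ($i + 1) * $chunk;   # crossed into next chunk
    }
    exit 0;
}
wait for 1 .. $workers;       # reap all children
# then: cat part-*.out > combined.out
```

A line straddling a chunk boundary is handled by the worker that started it, while the next worker discards its leading partial line, so no line is processed twice or lost.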
