perl performance vs egrep by dba (Monk)
on Jan 23, 2005 at 13:38 UTC
dba has asked for the wisdom of the Perl Monks concerning the following question:
I am doing a basic pattern search of an approximately 20GB text file (split into 10 text files of 2GB each). I use egrep, since I have multiple patterns joined with the | operator. It is slow. Before I try to do the same in Perl for benchmarking, I request the wisdom of the esteemed monks: do you have any insight into solving similar issues? Thanks, dba
I do run 10 parallel egrep processes, each searching 2GB, but a single process takes 35 minutes. Non-Perl question: is there a way to tell perl or egrep to use more memory than it normally does? The server has plenty of memory.
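Neither perl nor egrep takes a "use more memory" switch, but in Perl you can trade memory for fewer system calls by reading large fixed-length records instead of individual lines. A minimal sketch (the 8 MB chunk size and the sub name are my own, not from the post) that holds back any trailing partial line so a match is never split across two reads:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Setting $/ to a scalar reference makes <$fh> return fixed-length
# records instead of lines; here 8 MB per read.
my $chunk_size = 8 * 1024 * 1024;

sub count_matches {
    my ($fh, $re) = @_;              # $re should carry the /m flag
    local $/ = \$chunk_size;
    my ( $count, $tail ) = ( 0, '' );
    while ( my $chunk = <$fh> ) {
        $chunk = $tail . $chunk;     # prepend partial line from last read
        $chunk =~ s/([^\n]*)\z//;    # hold back trailing partial line...
        $tail = $1;                  # ...so no match spans two chunks
        $count++ while $chunk =~ /$re/g;
    }
    $count++ if length $tail && $tail =~ /$re/;
    return $count;
}
```

Called with, e.g., `my $re = qr/^(CP|KL|KM|ME|PA|PM|SL|SZ|WX|YZ)XX1/m;` and a filehandle opened on one of the 2GB files; larger chunks mean fewer read calls at the cost of more resident memory.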
The text files are not compressed. This is my sample perl code to benchmark timings:
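The script itself is not reproduced in the post as archived here; a minimal line-by-line filter doing the job described (file names are illustrative) might look like:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Does this line start with one of the two-letter codes followed by "XX1"?
sub line_matches {
    return $_[0] =~ /^(CP|KL|KM|ME|PA|PM|SL|SZ|WX|YZ)XX1/;
}

# Filter mode: print matching lines from the files named on the
# command line (the same job as the egrep run).
if (@ARGV) {
    while ( my $line = <> ) {
        print $line if line_matches($line);
    }
}
```

Run as `perl scan.pl file1.txt file2.txt > matches.txt`.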
Any suggestions to tune this?
Thank you, monks, for your great suggestions. I did some benchmarking. Instead of working with a 2GB file, I took the first 1 million rows (approximately 300MB) and used that to benchmark.
I also modified the regex to /^(CP|KL|KM|ME|PA|PM|SL|SZ|WX|YZ)XX1/
1. cat (just copying the file) took 5 seconds
2. perl took 14 seconds
3. egrep took 22 seconds
(Both perl and egrep used the regex above. I ran each test twice to avoid any discrepancies.)
The change in regex made a world of difference in both egrep and perl.
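The post does not show the original pattern, but if it was unanchored, the anchor helps twice over: the engine can reject a non-matching line after looking at a couple of characters instead of retrying the alternation at every position, and the pattern stops matching codes that happen to occur mid-line. A small illustration:

```perl
#!/usr/bin/perl
use strict;
use warnings;

my @lines = ( "CPXX1 record\n", "prefix CPXX1 record\n", "zzz\n" );

# Anchored: the alternation is tried only at the start of each line,
# so a non-matching line is rejected almost immediately.
my @anchored = grep { /^(CP|KL|KM|ME|PA|PM|SL|SZ|WX|YZ)XX1/ } @lines;

# Unanchored: the alternation is retried at every position, so it
# also hits codes that appear mid-line.
my @unanchored = grep { /(CP|KL|KM|ME|PA|PM|SL|SZ|WX|YZ)XX1/ } @lines;

printf "anchored: %d  unanchored: %d\n",
    scalar @anchored, scalar @unanchored;
```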
To my surprise, perl performed better than egrep.
Thanks again for all your great suggestions. I will try demerphq's solution and update with the results.