http://www.perlmonks.org?node_id=424382

dba has asked for the wisdom of the Perl Monks concerning the following question:

I am doing a basic pattern search of an approximately 20GB text file (split into 10 text files of 2GB each). I use egrep, since I have multiple patterns joined with the | alternation. It is slow. Before I try to do the same in perl for benchmarking, I request the wisdom of the esteemed monks: do you have any insight into solving similar issues? Thanks, dba
Update1
I do use 10 parallel egrep processes to search 2 GB each, but a single process takes 35 minutes. Non-perl question: is there a way to tell perl or egrep to use more memory than it normally uses? I have a lot of memory on the server.
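Here is a rough sketch of one way to make perl trade memory for fewer read calls: slurp the file in large blocks and split lines by hand. The file names, the 8 MB block size, and the grouped pattern (see Update3 below) are placeholders, and whether this actually beats perl's default buffered line reads would need testing:

use strict;
use warnings;

open my $in,  '<', 'input.txt'  or die "Unable to open input file: $!";
open my $out, '>', 'output.txt' or die "Unable to open output file: $!";

my $tail = '';                                 # partial line carried across blocks
while (read($in, my $chunk, 8 * 1024 * 1024)) {
    $chunk = $tail . $chunk;
    my $nl = rindex($chunk, "\n");
    if ($nl < 0) { $tail = $chunk; next; }     # no complete line yet, read more
    $tail = substr($chunk, $nl + 1);           # keep the trailing partial line
    # limit of -1 preserves empty lines at the end of the block
    for my $line (split /\n/, substr($chunk, 0, $nl), -1) {
        print $out "$line\n" unless $line =~ /^(CP|KL|KM|ME|PA|PM|SL|SZ|WX|YZ)XX1/;
    }
}
# the last line may lack a trailing newline; filter and flush it too
print $out $tail if length $tail && $tail !~ /^(CP|KL|KM|ME|PA|PM|SL|SZ|WX|YZ)XX1/;
close $out;
close $in;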
Update2
The text files are not compressed. This is my sample perl code to benchmark timings:
use strict;
open(OFILE, ">output.txt") or die "Unable to open output file";
open(IFILE, "<input.txt")  or die "Unable to open input file";
while (<IFILE>) {
    print OFILE unless /^CPXX1|^KLXX1|^KMXX1|^MEXX1|^PAXX1|^PMXX1|^SLXX1|^SZXX1|^WXXX1|^YZXX1/;
}
close(IFILE);
close(OFILE);

Any suggestions to tune this?
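One direction I may try, since all ten patterns are fixed five-character prefixes: skip the regex engine and test the prefix with a hash lookup. A rough sketch, untested against the real data:

use strict;
use warnings;

my %skip = map { $_ => 1 }
    qw(CPXX1 KLXX1 KMXX1 MEXX1 PAXX1 PMXX1 SLXX1 SZXX1 WXXX1 YZXX1);

open(OFILE, ">output.txt") or die "Unable to open output file";
open(IFILE, "<input.txt")  or die "Unable to open input file";
while (<IFILE>) {
    # compare the first five characters against the skip list directly
    print OFILE unless $skip{ substr($_, 0, 5) };
}
close(IFILE);
close(OFILE);

Whether this beats the regex will depend on the perl version and the data, so it needs benchmarking like everything else.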
Update3
Thank you, monks, for your great suggestions. I did some benchmarking. Instead of working with a 2GB file, I took the first 1 million rows (approximately 300MB) of the text file and used that to benchmark.
I also modified the regex to /^(CP|KL|KM|ME|PA|PM|SL|SZ|WX|YZ)XX1/
1. for cat to just copy, it took 5 seconds
2. for perl it took 14 seconds
3. for egrep it took 22 seconds
(Both perl and egrep used the same regex as above. I ran each test twice to avoid any discrepancies.)
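For completeness: the regex comparison can also be scripted with the core Benchmark module. A rough sketch, with made-up sample lines and an arbitrary iteration count:

use strict;
use warnings;
use Benchmark qw(timethese);

# synthetic data: half the lines match a skip prefix, half do not
my @lines = ('CPXX1 some record', 'ABCDE another record') x 50_000;

my $alternation = qr/^CPXX1|^KLXX1|^KMXX1|^MEXX1|^PAXX1|^PMXX1|^SLXX1|^SZXX1|^WXXX1|^YZXX1/;
my $grouped     = qr/^(CP|KL|KM|ME|PA|PM|SL|SZ|WX|YZ)XX1/;

timethese(20, {
    alternation => sub { my $kept = grep { !/$alternation/ } @lines },
    grouped     => sub { my $kept = grep { !/$grouped/ } @lines },
});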
Observations
The change in regex made a world of difference in both egrep and perl. My guess is that the single leading ^ anchor and the factored-out common XX1 suffix give the matcher much less work to do per line, but I have not verified that.
To my surprise, perl performed better than egrep.
Update4
Thanks again for all your great suggestions. I will try demerphq's solution and update with the results.