I can second the idea of using standard Unix tools. In particular awk(1) can deliver huge performance improvements over perl. We had a text-file processing app which went from 20 minutes to 20 seconds when we recoded in awk. YMMV of course.
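To make the comparison concrete, here is a minimal sketch of the kind of line-oriented field processing where awk tends to shine. The task (summing the second column of a whitespace-delimited file) and the file name data.txt are hypothetical, not from the original post:

```shell
# awk: read each line, accumulate field 2, print the total at end of input
awk '{ s += $2 } END { print s }' data.txt

# Roughly equivalent Perl one-liner:
#   -n  loop over input lines, -a autosplit each line into @F, -l handle newlines
perl -lane '$s += $F[1]; END { print $s }' data.txt
```

Both read the file once in a single pass; for this style of job the difference is usually in per-line interpreter overhead, which is why results vary by workload.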
Re^2: How to improve speed of reading big files
Yes, there are awk guys out there. I receive awk code from two people who will remain anonymous, and their awk code is good awk code. I write Perl that "stitches their awk together," and we get a good result, meaning an application that works.
As far as "Perl experts" go, some folks are considerably better at Perl than others. I remain skeptical that a well-coded Perl app would perform worse than an awk app. I will add that fewer lines of Perl doesn't necessarily mean higher performance. I will also add that the three of us have an app and a process that work, and we don't see any need to optimize the performance. Before taking on an "optimizing" mission, the relevant questions are: does anybody care, and does it matter? The main thing is: does it work?