tight loop regex optimization
by superawesome (Initiate) on Nov 01, 2011 at 04:25 UTC

superawesome has asked for the wisdom of the Perl Monks concerning the following question:
I'm trying to speed up a few regexes in a script that gets called a few dozen times a day. Each invocation basically loops through a ton of source code and builds a sort of searchable index.
The problem is that a single run of this script now takes more than a day. There's some parallelization that can be done, but I'm hopeful there's something to be gained within each run as well.
The script in question is MXR's "genxref": here
Here's a relevant NYTProf run (one of the dozens that get run daily, across different source repos): here. You can see some lines are getting hit a million times or more.
Here's a good example fragment:
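Roughly, the hot spots look like this (an illustrative mock-up rather than the actual genxref code; the names, keywords, and pattern are all made up):

```perl
use strict;
use warnings;

# Illustrative only -- not genxref's actual code. The hot fragments in
# scripts like this tend to be loops of the following shape: an
# interpolated, capturing alternation applied to every input line.
my @keywords = qw(if else while return);
my $kw = join '|', @keywords;

my @source = (
    'if ($x) {',
    '    helper();',
    '} else {',
    '    return;',
    '}',
);

my @hits;
my $lineno = 0;
for my $line (@source) {
    $lineno++;
    # Interpolating $kw means Perl must re-stringify the pattern and
    # compare it to the cached compilation on every match, and the
    # capturing parens cost extra even though $1 is never used here.
    push @hits, $lineno if $line =~ /\b($kw)\b/;
}
print "matched lines: @hits\n";   # matched lines: 1 3 4
```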
This is one problematic snippet, but hardly the only one... the script is littered with complicated regexes. Most of them are quick enough as-is, but some (like the above) have become a significant performance bottleneck as our code base has grown.
How might I improve this situation? Specific improvements and general ideas are both welcome... I know the basics from a theoretical perspective (don't capture if you don't have to, try not to backtrack, etc.), but not how to spot or fix problems in practice. I don't have enough real-world experience with this.
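For concreteness, here's what I mean by the basics, on a made-up pattern and data set (illustrative only, not from genxref):

```perl
use strict;
use warnings;

# Illustrative pattern and data -- not from genxref itself.
# Three habits that usually pay off in hot loops:
#   1. compile once with qr// instead of re-interpolating a string;
#   2. group with (?:...) when the submatch isn't needed;
#   3. anchor the pattern so non-matching lines fail fast.
my $decl = qr/^\s*(?:my|our|sub)\s+[\$\@\%]?(\w+)/;  # built once, outside the loop

my @lines = (
    'my $count = 0;',
    'sub lookup {',
    '    return;',
    'our @stack;',
);

my @names;
for my $line (@lines) {
    push @names, $1 if $line =~ $decl;   # $1 is the only capture kept
}
print "@names\n";   # prints: count lookup stack
```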