The question was: "So that you're certain the pattern matching is where you're spending too much time?" In other words: is the regex actually the bottleneck, yes or no? It's a good question.
I did look at it. It's nasty. But Perl has made strides in improving the performance of alternation, and I don't have access to the data set. Nor do I know what the surrounding code looks like, or even whether we're dealing with a modern Perl version. I can come up with data that fails so fast that the compilation of the regex dwarfs the match time. And I can think of scenarios where massive files are being slurped, most of which the regex can reject quickly, in which case the act of slurping the file becomes a bigger issue than the regex itself.
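How much the match time depends on the data is easy to demonstrate with the core Benchmark module. Here's a minimal sketch; the alternation and the two data sets are made-up stand-ins for the OP's pattern and files, not the real thing. The "fail fast" string never begins a word with any letter the alternation starts with, so the engine's start-class optimization rejects it almost immediately, while the "near miss" string forces many partial matches:

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

# Stand-in for the big alternation (placeholder, not the OP's regex)
my $re = qr/\b(?:alpha|beta|gamma|delta)\b/;

# Data the optimizer can reject almost instantly: no candidate start chars
my $fast_fail = "0123456789 " x 10_000;

# Data full of near misses: lots of partial matches, no full match
my $near_miss = "alphX betX gammX deltX " x 10_000;

cmpthese( -1, {
    fast_fail => sub { my $hit = $fast_fail =~ $re },
    near_miss => sub { my $hit = $near_miss =~ $re },
} );
```

On typical builds the fail-fast case runs orders of magnitude more iterations per second, which is exactly why you can't judge the regex's cost without seeing representative data.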
The regex is so unwieldy that I'd like to know it is the primary bottleneck before spending time on a solution. My hunch is that by switching to a "read line by line" approach, and breaking the pattern match into smaller chunks that can reject a file as early as possible, the OP could avoid the IO cost of reading an entire file when the first few lines would have been enough to reject it. And if this is being repeated again and again, the savings would grow.
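The line-by-line early-reject idea might look something like this sketch. Everything here is a hypothetical illustration: the `$gate` and `$full` patterns and the `scan_file` helper are placeholders for whatever cheap pre-check can be split out of the OP's real regex, not the actual code:

```perl
use strict;
use warnings;

# Hypothetical cheap gate: something any interesting file must satisfy,
# checked before the expensive full pattern is ever tried.
my $gate = qr/^HEADER:/;                   # placeholder quick check
my $full = qr/HEADER:\s+(\w+).*?\bEND\b/s; # placeholder for the big regex

sub scan_file {
    my ($path) = @_;
    open my $fh, '<', $path or die "Can't open $path: $!";

    # Read only the first line; if it fails the cheap gate,
    # reject without paying to read (or match) the rest of the file.
    my $first = <$fh>;
    return 0 unless defined $first && $first =~ $gate;

    # Only now slurp the remainder and run the expensive pattern.
    my $rest = do { local $/; <$fh> };
    $rest = '' unless defined $rest;
    return ( $first . $rest ) =~ $full ? 1 : 0;
}
```

In practice you'd split the big alternation into several smaller `qr//` chunks and test the cheapest, most selective ones first, so most files never reach the expensive match at all.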
But we only see one regex without any of the surrounding code, and without a good understanding of the data set. So I think it's reasonable to ask what the outcome of profiling is before diving into the big chore of breaking that regex down into more manageable components.
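For the profiling step itself, Devel::NYTProf makes the question cheap to answer; the script name below is a placeholder for the OP's actual program:

```shell
# Run the program under the profiler (writes nytprof.out in the cwd)
perl -d:NYTProf your_script.pl

# Convert the raw profile into browsable HTML reports
nytprofhtml
```

The per-statement timings make it obvious whether the time is going into the match, the regex compilation, or the file IO around it.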
My question wasn't intended as a jab. If the OP had provided a more complete snippet of code and a sample file, I would have profiled it myself out of curiosity. I was sincere enough about it to spend some time fixing Devel::NYTProf on my system (and submitting a diff to the maintainer; it's now fixed in release v4.08) in case the discussion gave me an opportunity to try it out myself.