Re^2: Help with speeding up regex

by BrowserUk (Patriarch)
on Aug 11, 2012 at 11:13 UTC


in reply to Re: Help with speeding up regex
in thread Help with speeding up regex

How does NYTProf speed up regex?


With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.

The start of some sanity?

Re^3: Help with speeding up regex
by Anonymous Monk on Aug 11, 2012 at 11:35 UTC
    He said "so that you're certain the pattern matching is where you're spending too much time?" -- is the regex pattern the bottleneck, yes or no? It's a good question.

      Did you look at the regex? That's pretty much a given.



        I did look at it. It's nasty. But Perl has made strides in improving the performance of alternation. And I don't have access to the data set. Nor do I know what the surrounding code looks like, or even whether we're dealing with a modern Perl version. I can come up with data that fails so fast that the compilation of the regex dwarfs the match time. And I can think of scenarios where massive files are being slurped, most of which can be rejected quickly by the regex, in which case the act of slurping the file becomes a bigger issue than the regex itself.
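        A rough Benchmark sketch of that sort of fail-fast case -- the pattern and data here are invented purely for illustration, not taken from the OP's problem:

            use strict;
            use warnings;
            use Benchmark qw(cmpthese);

            # A wide literal alternation standing in for the OP's pattern.
            my $re = qr/\b(?:alpha|bravo|charlie|delta|echo|foxtrot|golf|hotel)\b/;

            my $miss = 'nothing of interest here';     # rejected almost immediately
            my $hit  = ('filler ' x 500) . 'hotel';    # has to scan a long way before matching

            cmpthese( -2, {
                fail_fast => sub { $miss =~ $re },
                late_hit  => sub { $hit  =~ $re },
            });

        On data like $miss the match is nearly free, which is exactly the situation where the one-off compilation cost, or the I/O, dominates instead.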

        The regex is so unwieldy that I'd like to know it is the primary bottleneck before spending time on a solution. My hunch is that by switching to a "read line by line" approach, and breaking the pattern match into smaller chunks that can reject a file as early as possible, the OP would be able to avoid the I/O cost of reading an entire file when the first few lines would have been enough to reject it. And if this is being repeated file after file, the savings would grow.
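        Something along these lines is what I have in mind. It is a sketch only: the cheap marker pattern, the five-line cutoff, and the stand-in alternation are all made-up placeholders for whatever the OP's data actually allows.

            use strict;
            use warnings;

            my $marker = qr/^RECORD\b/;      # hypothetical cheap test found near the top of interesting files
            my $full   = qr/foo|bar|baz/;    # stand-in for the OP's big alternation

            sub file_matches {
                my ($path) = @_;
                open my $fh, '<', $path or die "Can't open $path: $!";

                # Cheap early rejection: if the marker isn't in the first 5 lines,
                # skip the rest of the file and the expensive match entirely.
                my $candidate = 0;
                while (my $line = <$fh>) {
                    if ($line =~ $marker) { $candidate = 1; last }
                    last if $. >= 5;
                }
                return 0 unless $candidate;

                # Only now pay for the full pattern, still reading line by line
                # rather than slurping the whole file into memory.
                seek $fh, 0, 0;
                while (my $line = <$fh>) {
                    return 1 if $line =~ $full;
                }
                return 0;
            }

            # usage: perl early_reject.pl file1 file2 ...
            print "$_ matches\n" for grep { file_matches($_) } @ARGV;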

        But we only see one regex without any of the surrounding code, and without a good understanding of the data set. So I think it's reasonable to ask what the outcome of profiling is before diving into the big chore of breaking that regex down into more manageable components.

        My question wasn't intended to be a jab. If the OP had provided a more complete snippet of code and a sample file I would have profiled it myself out of curiosity. I was sincere enough to spend some time fixing Devel::NYTProf on my system (and then submitting a diff to the maintainer -- it's now been fixed in release v4.08) in case the discussion led to an opportunity for me to try it out myself.
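        For anyone following along, the basic Devel::NYTProf workflow is only two commands (the script name here is made up):

            perl -d:NYTProf match_files.pl    # run under the profiler; writes nytprof.out
            nytprofhtml                       # turn nytprof.out into an HTML report in ./nytprof/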


        Dave
