in reply to Re: Re (tilly) 3: Research ideas
in thread Research ideas

First of all you are right, the regex /vr(o*)*m/ makes a naive NFA go gaga into an infinite loop. In fact I believe that Perl used to break on that exact case.

Contrary to a naive reading, this is not the same as the match /vr(o*)m/; in fact it gives the same result as /vro*()m/ (that is, $1 winds up matching an empty string just before the m).

The solution currently used is to keep track of when you start following the same logic during a zero-length match and refuse to go down the same path a second time. So before the m it tries to match another o and cannot, then tries to go around the loop and match o*, and succeeds; then it tries to go around the loop again, detects the recursion, and fails back to trying to match the m, which succeeds. (Thereby finding your empty match.) A naive optimization will get this wrong.
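The trace above can be sketched in code. This is a from-scratch toy (in Python rather than Perl, with invented names, hard-coded to /vr(o*)*m/), not Perl's actual engine; the `entered` set plays the role of the zero-length recursion check:

```python
# Toy sketch of the zero-length recursion check for /vr(o*)*m/.
# Invented names; not Perl's real implementation.

def _o_star_ends(s, i):
    """Greedy o*: candidate end positions, longest first (NFA order)."""
    j = i
    while j < len(s) and s[j] == 'o':
        j += 1
    return range(j, i - 1, -1)

def _loop(s, i, group1, entered):
    """Match (o*)*m starting at i.  `entered` holds positions where a
    loop iteration has already begun; re-entering at the same position
    means the last trip consumed nothing, so we refuse to recurse."""
    if i not in entered:                      # the zero-length guard
        for j in _o_star_ends(s, i):
            r = _loop(s, j, s[i:j], entered | {i})
            if r is not None:
                return r
    # Give up on the loop and try the final literal m.
    if i < len(s) and s[i] == 'm':
        return (i + 1, group1)                # (match end, $1)
    return None

def match_vroom(s):
    """Match /vr(o*)*m/ anchored at the start; returns (end, $1) or None."""
    return _loop(s, 2, None, frozenset()) if s.startswith('vr') else None
```

On "vroom" the last trip around the loop matches the empty string, so $1 ends up empty, just as with /vro*()m/; remove the `entered` check and the recursion never terminates.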

My proposed optimization has to be done carefully, but will get it right. What I am suggesting is that you first figure out all of the possible transitions depending on what the next character is, in the order an NFA will visit them, remove the duplicates, and then toss that into a pseudo-state. In figuring out the possible transitions you need to get the zero-length recursion right. However if you do that correctly, then at no point does my optimization change what match the RE engine will find. It just allows you to "parallelize" the reasoning process and thereby eliminate a possible spot to backtrack to.
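As a hypothetical sketch of building one such pseudo-state (state names and representation invented for illustration): take a state's outgoing transitions in the order a backtracking NFA would try them, group them by next character, and drop duplicate targets:

```python
# Invented sketch: collapsing one NFA state's alternatives into a
# "pseudo-state" keyed by the next input character.

def build_pseudo_state(alternatives):
    """alternatives: list of (char, target_state) pairs in the order a
    backtracking NFA would try them.  Returns a dict mapping each next
    character to the ordered list of *distinct* target states, so the
    engine can discriminate on the next character and never follow a
    second route to the same state."""
    table = {}
    for ch, target in alternatives:
        targets = table.setdefault(ch, [])
        if target not in targets:   # drop the duplicate route
            targets.append(target)
    return table
```

For example, build_pseudo_state([('a', 's1'), ('b', 's2'), ('a', 's1'), ('a', 's3')]) gives {'a': ['s1', 's3'], 'b': ['s2']}: on seeing an a the engine has only the ordered pair s1, s3 left to consider, and the duplicate route to s1 is gone.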

Now for those who are horribly confused, grab Mastering Regular Expressions. In a nutshell the basic problem is this. We have a regular expression. We have a string. A DFA engine (which stands for "deterministic finite automaton") essentially walks the string once, keeping track at each character of all of the possible places it could be in the RE match. This gives good guaranteed performance, but it makes it hard to figure out what captured parentheses should be, or to implement backreferences (which need to keep track of more information than just where you are in the string and the RE). An NFA engine (for "nondeterministic finite automaton") just does a recursive brute-force search of possible ways to match the string to the RE. This allows for lots of features (for instance the difference between .* and .*? is whether you try to match another . first or second) but you can get into cases where there is an exponential explosion in how much work needs to be done to recognize that a string does not match. Perl, and most other scripting languages, use NFA engines. A few use DFAs.
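The contrast can be illustrated with a toy matcher for patterns built from literals and ? (an invented mini-language, not either engine's real representation). The backtracking version recurses over choices in NFA fashion; the parallel version walks the string once carrying a set of pattern positions, DFA fashion:

```python
# Toy illustration of backtracking (NFA-style) vs. parallel state-set
# (DFA-style) matching.  Invented representation, not any real engine.

def parse(p):
    """'a?a?aaa' -> [(char, optional?), ...]"""
    toks, i = [], 0
    while i < len(p):
        if i + 1 < len(p) and p[i + 1] == '?':
            toks.append((p[i], True)); i += 2
        else:
            toks.append((p[i], False)); i += 1
    return toks

def backtrack(toks, s, pi=0, si=0):
    """NFA-style: recursively try 'consume' before 'skip'.  On the
    pattern a?...a?aa...a against a run of a's this explores
    exponentially many dead ends before settling the question."""
    if pi == len(toks):
        return si == len(s)
    ch, optional = toks[pi]
    if si < len(s) and s[si] == ch and backtrack(toks, s, pi + 1, si + 1):
        return True
    return optional and backtrack(toks, s, pi + 1, si)

def parallel(toks, s):
    """DFA-style: one left-to-right pass over the string, tracking the
    set of pattern positions reachable so far; never backtracks."""
    def close(states):  # also reach positions past skippable '?' tokens
        stack, seen = list(states), set(states)
        while stack:
            pi = stack.pop()
            if pi < len(toks) and toks[pi][1] and pi + 1 not in seen:
                seen.add(pi + 1); stack.append(pi + 1)
        return seen
    states = close({0})
    for c in s:
        states = close({pi + 1 for pi in states
                        if pi < len(toks) and toks[pi][0] == c})
    return len(toks) in states
```

On a?a?a?aaa both agree on every answer, but as the number of a? atoms grows the backtracking search blows up exponentially on runs of a's, while the parallel pass stays linear in the string length.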

Donuts and DFA's
by gryng (Hermit) on Feb 20, 2001 at 21:14 UTC
    By twisting my brain into the shape of a donut, I think I have seen a glimpse of what you mean. Let me see if I have it right:

    You propose to turn a regular expression into an ordered set of states, one for each character position (or possibly no character as well?). This is so that you can go through each atom of each set once (that is, no backtracking, which is the bane of an NFA). This seems (-seems- to someone who has never written an NFA or a DFA before) very similar to a DFA, except that a DFA would not have the order preference that your ordered sets would have, they would just have sets.

    The consequence to that difference is that with a DFA you can go through each set in parallel, but with the proposed NFA change, you would still have to recurse backwards and forwards. This is -similar- to backtracking, but seems better, because you are going through a finite number of (ordered) sets, and not a possibly geometrically expanding number of states.

    I may have muddled the nomenclature a bit, sorry :)


      Sort of, only not so complicated. :-)

      An NFA is already compiled internally into a series of states with transitions between them. What I am proposing to do is take that representation and rewrite part of it using new states that are ordered subsets of the original.

      There are two tricks here.

      The more obvious one is summed up by the phrase, "Put off figuring out why you have done this work as long as possible." That is, if you are going to have to backtrack and then go forward with identical logic, why not make it so that you go forward following the common logic as long as possible? This is next-character discrimination; for instance an RE to match all of the state names would be faster because if you saw an A first you would have already figured out that you only need to look at states whose names start with A.

      The more subtle and important one is that if you are aggressive about it, you can eliminate a lot of repeated following of the same reasoning. Basically if ever you arrive at the same place in the string, with the same state information relevant to your match, at the same position in your RE, then the outcome is going to be the same. With a traditional NFA you might arrive at a state, go forward, fail, backtrack. Then you follow a different line of reasoning forward, reach the same place, and of course you will do the same work to figure out that you fail again. This is all repeated useless work, you already did this stuff.

      Alternatively, if it succeeds the first time, it will not backtrack, and again it will be irrelevant that there is another way to the same state.

      Either way, the *second* way to the same combination is irrelevant. So in the process of optimizing, if you see that you can reach the same state through multiple lines of reasoning, you can safely drop the second way of reaching it. By explicitly parallelizing the reasoning process that the NFA will use you can cause it to realize that 2 chains of reasoning will come back to the same state, and so the ordered sets of states can omit duplicates. And that is why you can avoid the combinatorial backtracking disaster!

      OTOH there is no requirement to optimize the whole RE in this way. In fact when you are capturing a backreference used elsewhere in the RE, then state information relevant to the global match is being captured and you explicitly cannot apply this optimization. You might still optimize bits and pieces, but you won't be able to find a simple and efficient way to match crazy things like /(.*).*\1/...
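      The repeated-reasoning point can be sketched as memoizing failed (RE position, string position) pairs, here for a toy language of literals and * (an invented sketch, and per the caveat above, only sound where no backreference or other global match state is in play):

```python
# Invented sketch: memoize (pattern position, string position) pairs
# that are known to fail, so the same reasoning is never re-followed.
# NOT safe in the presence of backreferences, which carry extra state.

def parse_star(p):
    """'a*a*b' -> [(char, starred?), ...]"""
    toks, i = [], 0
    while i < len(p):
        if i + 1 < len(p) and p[i + 1] == '*':
            toks.append((p[i], True)); i += 2
        else:
            toks.append((p[i], False)); i += 1
    return toks

def match_memo(toks, s):
    failed = set()          # (pi, si) pairs already known to dead-end
    def go(pi, si):
        if (pi, si) in failed:
            return False    # been here, failed here: skip the rework
        if pi == len(toks):
            return si == len(s)
        ch, starred = toks[pi]
        if starred:
            j = si
            while j < len(s) and s[j] == ch:
                j += 1
            # greedy: try the longest run of ch first, then back off
            ok = any(go(pi + 1, k) for k in range(j, si - 1, -1))
        else:
            ok = si < len(s) and s[si] == ch and go(pi + 1, si + 1)
        if not ok:
            failed.add((pi, si))
        return ok
    return go(0, 0)
```

      Without the `failed` set, a pattern like a*a*a*b against a long run of a's re-explores the same dead ends over and over; with it, each (pi, si) pair is explored at most once.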

        Ok tilly, I think I understand now. Basically, one simple way to do this optimization would be to keep a list of all states you've visited so far that have failed. Not the exact state that failed, but the furthest-back state that would definitively lead to that failed state.

        Then, before proceeding to any further state, you check to make sure your current state isn't in the list of failed states. You could also prune states that are too old to match anymore, and also optimize the search of this failed state list heavily based on your current state.

        The other thing you could do to make it more efficient is, instead of storing only the furthest-back state, to allow branching and store multiple fail-states for each failure that occurs.

        State machines are odd beasts :) .