Re^2: Analysis of Regular Expressions
by PetaMem (Priest) on Mar 17, 2010 at 17:16 UTC
OK, it's clear that solution 1) has exponential runtime and space requirements; the size of the charset doesn't even matter (well, until JavaFan points out that a charset of one character would...).
I basically went for solution 2) before, but only with some half-baked heuristic constructs, so a more high-level description of the problem is probably in order: I'm working on a chatbot demo that has to catch certain phrases.
That is simple pattern matching done just before any parsing, and it delivers decent speed. The whole point of this discussion and my endeavour is that the set of regular expressions keeps getting extended, and it is not feasible (at least I haven't come up with a feasible way) to sort them into "do-what-I-mean" order manually.
Doing a trivial "sort by length" has actually worked very well for a set of up to 500-600 rules. But now I'm starting to get false matches ONLY because some more generic regular expressions are tried before the more specific ones.
Under the assumption that (a|b|c) is more generic than (a), it is clear that the simple by-length ordering will yield the wrong order, because I want to try the specific rules first and the generic ones last.
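To illustrate, here is a minimal sketch of one possible specificity heuristic (in Python for brevity; the scoring function and the sample rules are my own invention, not anything from the thread): count alternation branches and wildcard-ish constructs as "genericity", and break ties by preferring longer (more literal) patterns.

```python
import re

def genericity(pattern: str) -> tuple:
    """Crude ordering key: more alternation branches and more
    wildcard constructs mean 'more generic', so such patterns
    sort later; longer literals count as more specific."""
    alternations = pattern.count("|")                        # (a|b|c) has 2
    wildcards = len(re.findall(r"\.|[*+?]|\\w|\\d|\\s", pattern))
    return (alternations, wildcards, -len(pattern))

# Hypothetical rule set, just to show the ordering effect.
rules = ["(a|b|c)", "hello world", "hel+o", "(a)"]
ordered = sorted(rules, key=genericity)
# The long literal "hello world" is tried first,
# the alternation "(a|b|c)" last.
```

This is of course exactly the kind of half-baked heuristic mentioned above; it would mis-rank plenty of real patterns, but it captures the "specific before generic" intent better than raw length alone.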
What might help:
Maybe the best solution would be to give every rule an identifier (which has happened anyway) and attach information like "try this before ruleX". Which is starting to look like a parser. :-)
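The "try this before ruleX" idea amounts to a partial order over rule identifiers, which a topological sort can turn into a concrete trial order. A minimal sketch (the rule names and the `before` table are hypothetical, purely for illustration):

```python
from graphlib import TopologicalSorter  # Python 3.9+

# before[r] lists the rules that r must be tried before,
# i.e. r is considered more specific than those rules.
before = {
    "greet_specific": ["greet_generic"],
    "greet_generic": [],
    "bye_exact": ["bye_any"],
    "bye_any": [],
}

# TopologicalSorter wants node -> predecessors, so invert the table:
# if r comes before l, then r is a predecessor of l.
deps = {rule: set() for rule in before}
for rule, later in before.items():
    for l in later:
        deps[l].add(rule)

# A valid trial order; rules with no constraints may land anywhere.
order = list(TopologicalSorter(deps).static_order())
```

One nice property of this approach is that only the pairs that actually cause false matches need to be declared; everything else can keep falling back to the length heuristic as a tiebreaker.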