Re^2: Prevent "Out of Memory" error
by tford (Beadle) on Nov 23, 2009 at 22:44 UTC
You may be right that the unadulterated algorithm is exponential in its memory requirements, but its time complexity is classified as O(n^3).
It sounds like what many people are saying is basically "make it more efficient." Well, I did: I used "shared subtrees" to consolidate memory usage (which helped a lot), along with a technique called "top-down filtering," which helps the parser avoid the unnecessary work that JadeNB pointed out above.
Now it is about 3 times faster and uses about 1/10 as much memory, but the problem theoretically still exists.
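To make the "shared subtrees" point concrete, here is a toy illustration, sketched in Python for brevity with a hypothetical two-rule grammar (E -> E '+' E | 'a'). The number of distinct parse trees for "a + a + ... + a" grows as the Catalan numbers, so materializing every tree is exponential; but memoizing results per input span, which is the sharing trick, keeps the counting work polynomial, in the same spirit as an O(n^3) chart parser:

```python
from functools import lru_cache

# Toy grammar (hypothetical): E -> E '+' E | 'a'
# count_parses(n) = number of parse trees for n terms joined by '+'.
# Memoizing on the span (i, j) is the "shared subtree" idea: each
# sub-span is solved once, no matter how many trees it appears in.

def count_parses(n_terms):
    @lru_cache(maxsize=None)
    def trees(i, j):  # number of parses of terms i..j (inclusive)
        if i == j:
            return 1  # a single terminal 'a'
        # split at each possible top-level '+'
        return sum(trees(i, k) * trees(k + 1, j) for k in range(i, j))
    return trees(0, n_terms - 1)

print([count_parses(n) for n in range(1, 8)])
# Catalan numbers: [1, 1, 2, 5, 14, 42, 132]
```

The counts explode even though the table of shared sub-results stays small; that gap between "number of trees" and "amount of work" is exactly what the shared representation buys.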
It just doesn't seem right that Perl lets you keep throwing memory at a problem until it crashes, without giving you some way to trap the failure.
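For what it's worth, the trap-the-failure idea is workable in environments where allocation failure surfaces as a catchable error. Here is a minimal sketch in Python (the `resource` module is Unix-only); in Perl, a similar cap could come from the shell's `ulimit -v` or the BSD::Resource CPAN module, though Perl's own "Out of memory!" is normally fatal, which is exactly the complaint here:

```python
import resource

def alloc_with_cap(cap_bytes, n_items):
    """Cap the process's address space, attempt a big allocation,
    and trap the failure instead of dying."""
    soft, hard = resource.getrlimit(resource.RLIMIT_AS)
    resource.setrlimit(resource.RLIMIT_AS, (cap_bytes, hard))
    try:
        _ = [0] * n_items      # may deliberately exceed the cap
        return "allocated"
    except MemoryError:
        return "trapped"       # back off: free caches, retry smaller
    finally:
        resource.setrlimit(resource.RLIMIT_AS, (soft, hard))

# ~80 GB requested under a ~512 MB cap: the failure is caught, not fatal.
print(alloc_with_cap(512 * 1024 * 1024, 10**10))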
I'll see if I can get a super-simplified model of the grammar and parser, and show an example that we can all look at. Basically, I'm pretty sure there's no good way to tell the complexity of parsing a given input without actually parsing it first.
Until then, thanks for all the help guys!
Oh, by the way, I do realize that the parser used in the Perl compiler is pretty exceptional in the flexibility it allows and the speed with which it parses. However, Perl is still a programming language, and the Perl parser makes decisions to ensure that, in the end, the language it accepts is unambiguous. Plus, it's written in C.
My particular application requires that we "never pick the wrong interpretation" for any given input. That's why I think it's necessary to have it come up with all valid parses.
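Here is what "all valid parses" looks like on the same hypothetical toy grammar (E -> E '+' E | 'a'), again sketched in Python: every bracketing of the input is returned, and memoizing on the span means each shared sub-parse is computed once even though it appears inside many trees:

```python
from functools import lru_cache

# Toy grammar (hypothetical): E -> E '+' E | 'a'
# Returns every parse of the input, rendered as a bracketed string,
# so the caller never has to commit to one interpretation.

def all_parses(terms):
    @lru_cache(maxsize=None)
    def parse(i, j):
        if i == j:
            return (terms[i],)
        out = []
        for k in range(i, j):          # each possible top-level '+'
            for left in parse(i, k):
                for right in parse(k + 1, j):
                    out.append(f"({left}+{right})")
        return tuple(out)
    return list(parse(0, len(terms) - 1))

print(all_parses(("a", "a", "a")))
# → ['(a+(a+a))', '((a+a)+a)']
```

Enumerating the trees is still exponential in the worst case, of course; the sharing only guarantees that the work per distinct subtree is paid once.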