Re: Evolving a faster filter? by roboticus (Canon)
on Jan 05, 2013 at 15:31 UTC
In the past, I've used a modified priority queue to hold function pointers (the code was in C). As the code ran, it kept the last successful function at the head of the queue, with the rest of the queue sorted by number of successes so far, so the functions were tried in order of their success rate. It worked well, since the last successful function was frequently also successful on the next item in the list. In my case, all the functions had similar runtimes, so I didn't worry about factoring time into the equation. Also, my items to process tended to arrive with similar items grouped together, so one function might succeed on several tens of items in a row, and updating the queue wasn't particularly frequent.
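That scheme could be sketched like this (in Python rather than the original C, which isn't shown here; the function and variable names are mine): keep the filters ordered by success count, and bubble the most recently successful filter to the front.

```python
# Sketch of the "last winner first, rest by success count" ordering.
# Not the original C code; names are illustrative.
def classify(item, filters, successes):
    """Try filters in order; on a hit, move the winner to the front
    and keep the remaining filters sorted by success count.

    filters:   list of predicates
    successes: parallel list of success counts (mutated in place)
    """
    for i, f in enumerate(filters):
        if f(item):
            successes[i] += 1
            winner_f, winner_s = filters.pop(i), successes.pop(i)
            # Re-sort the rest so likelier filters are tried earlier.
            order = sorted(range(len(filters)), key=lambda j: -successes[j])
            filters[:] = [winner_f] + [filters[j] for j in order]
            successes[:] = [winner_s] + [successes[j] for j in order]
            return winner_f
    return None  # no filter matched this item
```

Since similar items tend to arrive in runs, the front-of-queue filter usually hits on the first try and no reshuffling happens at all.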
Since your functions have significantly different runtimes, I think I'd try a similar procedure, but rather than ordering by success rate alone, I might order by success rate divided by average runtime. If the average runtimes are known ahead of time, the code is pretty straightforward. If the runtimes change, you might try rebalancing the queue every X operations, using the time spent over those X operations to adjust the weights of the first few entries.
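One way that weighting might look (again a Python sketch under my own assumptions; the class, its fields, and the rebalance interval are all invented for illustration):

```python
import time

class TimedFilter:
    """Wrap a predicate and track calls, successes, and cumulative runtime."""
    def __init__(self, fn):
        self.fn = fn
        self.calls = 0
        self.successes = 0
        self.elapsed = 0.0

    def __call__(self, item):
        t0 = time.perf_counter()
        hit = self.fn(item)
        self.elapsed += time.perf_counter() - t0
        self.calls += 1
        if hit:
            self.successes += 1
        return hit

    def weight(self):
        # Success rate divided by average runtime; higher means "try earlier".
        if self.calls == 0:
            return float('inf')  # untried filters get a first chance
        rate = self.successes / self.calls
        avg_runtime = (self.elapsed / self.calls) or 1e-9  # guard tiny clocks
        return rate / avg_runtime

def rebalance(filters):
    """Re-sort by weight; call every X operations rather than per item."""
    filters.sort(key=lambda f: f.weight(), reverse=True)
```

Calling `rebalance` only every X items keeps the bookkeeping cheap while still letting a fast, frequently-successful filter drift toward the front.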
Finally, have you tried swapping your filters and objects loops? If you have few filters and many objects, I'd guess it might be faster to do it like this:
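The code block that followed here didn't survive, so the following is my reconstruction of the loop swap being described (a Python sketch, names invented): put the filters on the outer loop and winnow the object list as you go, so cheap filters shrink the list before expensive ones run.

```python
# Hedged reconstruction of the "swap the loops" idea; not the original code.
def winnow(objects, filters):
    """Outer loop over filters, inner loop over surviving objects.

    Each filter makes one pass, removing every object it matches,
    so later (possibly slower) filters see a shorter list.
    """
    survivors = objects
    for f in filters:
        survivors = [o for o in survivors if not f(o)]
        if not survivors:
            break  # everything rejected; skip the remaining filters
    return survivors
```

With many objects and few filters, each filter's per-call overhead is paid once per pass instead of once per object-filter pair's dispatch, which is the tradeoff being weighed in the next paragraph.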
Of course, you'd have to rearrange a good deal of code if you were going to try it this way. I recognize your handle, so I suspect you've already evaluated the tradeoff between filtering a list of objects vs. evaluating single objects. I'm mostly mentioning this trick for others with similar problems to solve.
When your only tool is a hammer, all problems look like your thumb.