> record their performance in previous runs and you sort them accordingly.

OK, some theoretical thoughts on how to sort them:

let c[i] be the average cost of running filter[i] on one object

let r[i] be the average ratio of remaining/checked objects after applying filter[i]

let o be the number of objects and c the cost so far

so applying filter i updates both:

c += o*c[i] and then o *= r[i]

(each of the o objects reaching filter i pays c[i] first, then o shrinks by the factor r[i])

so with a chosen ordering 0, 1, 2, ... we get

c = o*c[0]+ o*r[0]*c[1] + o*r[0]*r[1]*c[2] + ...

after factoring out the common terms we get

c = o * ( c[0] + r[0] * ( c[1] + r[1] * ( ... ) ) )
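This nested form is also cheap to evaluate: fold it from the inside out. A minimal sketch in Python for illustration (function and argument names are mine, not from the thread):

```python
def expected_cost(order, c, r, o=1.0):
    """Expected total cost of applying filters in the given order.

    c[i] = average cost per object for filter i
    r[i] = average pass ratio of filter i
    o    = number of objects entering the first filter

    Evaluates o * (c[k0] + r[k0] * (c[k1] + r[k1] * (...)))
    by folding the nested expression from the innermost term out.
    """
    total = 0.0
    for i in reversed(order):
        total = c[i] + r[i] * total
    return o * total
```

For example, with two filters c = [2, 1] and r = [0.5, 0.5], running filter 0 first costs 2 + 0.5*1 = 2.5 per object, while running filter 1 first costs only 1 + 0.5*2 = 2.0.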

I don't know if there is a direct solution for this optimization problem (I could ask some colleagues from uni, I'm pretty sure they'd know), but this equation is a good basis for a "branch-and-bound"¹ graph-search algorithm:

the minimal cost factor with filter[0] placed first is at least c[0] + r[0] * c[min], with c[min] being the smallest c[i] among the remaining filters (i.e. excluding filter[0]).

That's a pretty good branch-and-bound criterion, which should lead to an optimal solution very fast, without trying all factorial(@filters) permutations like tye did here.
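A sketch of how such a search could look, again in Python for illustration (the pruning rule is the bound above generalized to any prefix; all names are mine):

```python
def best_order(c, r):
    """Find the filter ordering with minimal expected cost per object.

    Depth-first search over orderings; a branch is pruned when even
    the cheapest possible continuation (the smallest remaining c[i],
    scaled by the fraction of objects still alive) cannot beat the
    best complete ordering found so far.
    """
    n = len(c)
    best = [float('inf'), None]  # [best cost, best order]

    def search(prefix, remaining, partial, reach):
        # partial = cost accumulated by the prefix (per initial object)
        # reach   = product of r[i] over the prefix (objects still alive)
        if not remaining:
            if partial < best[0]:
                best[0], best[1] = partial, prefix
            return
        # lower bound: the next filter alone costs at least
        # reach * min(c[i]) over the remaining filters
        bound = partial + reach * min(c[i] for i in remaining)
        if bound >= best[0]:
            return  # prune this branch
        for i in remaining:
            search(prefix + [i], remaining - {i},
                   partial + reach * c[i], reach * r[i])

    search([], set(range(n)), 0.0, 1.0)
    return best[1], best[0]
```

For instance, with c = [3, 1, 2] and r = [0.5, 0.9, 0.1], the cheap-but-leaky filter 1 goes last and the aggressive filter 2 goes first: the optimum is [2, 0, 1] at 2 + 0.1*(3 + 0.5*1) = 2.35 per object.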

That means that even if your recorded performance for each filter is fuzzy and unstable, you can always adapt and recalculate on the fly, even with more complicated models that use ranges of values for c and r.
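The on-the-fly bookkeeping could be as simple as running totals per filter (a hypothetical helper, not anything from the thread):

```python
class FilterStats:
    """Running estimates of c[i] and r[i] for one filter,
    updated from live runs so the ordering can be recomputed."""

    def __init__(self):
        self.checked = 0       # objects this filter has examined
        self.passed = 0        # objects that survived the filter
        self.total_time = 0.0  # total time spent in the filter

    def record(self, n_checked, n_passed, elapsed):
        self.checked += n_checked
        self.passed += n_passed
        self.total_time += elapsed

    @property
    def c(self):
        """Average cost per checked object."""
        return self.total_time / self.checked if self.checked else 0.0

    @property
    def r(self):
        """Average pass ratio (remaining / checked)."""
        return self.passed / self.checked if self.checked else 1.0
```

After each batch you feed the measured counts and elapsed time into `record`, and re-sort the filters whenever the estimates have drifted enough to change the optimal order.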