Formalise the idea that, even with recursion, regexes aren't Turing complete.
I think your approach of looking at the space required to simulate recursive regexes is the most straightforward one.
A back-of-the-envelope calculation is as follows: the "state" that describes the regex evaluation consists of a stack. Each level of the stack records a nonterminal (i.e., the regex to match at this recursive level), a position in the original string, and the captures and alternation choices made so far. You can stop simulating a branch of the match whenever the stack repeats a configuration, since if the regex will eventually match, it must also match using a stack with no repeats. So the stack depth is bounded by the number of distinct level configurations: say, (# of nonterminals * length of string * 2^(# alternations in the regex) * (length of string)^(2 * # of captures)). The last term stores a "start" and "end" pointer for each capture.
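To get a feel for the growth of that bound, here is the formula as a function (the function name and the sample parameters are mine, purely for illustration):

```python
def distinct_configs(nonterminals, n, alternations, captures):
    """Count of distinct per-level stack configurations for a recursive
    regex matched against a string of length n (the bound from above)."""
    return nonterminals * n * 2**alternations * n**(2 * captures)

# e.g. 5 nonterminals, a 100-character string, 3 alternations, 2 captures:
print(distinct_configs(5, 100, 3, 2))  # 400000000000
```

Even for these tiny parameters the count is astronomical, which is why the bound lands somewhere in the exponential-space family rather than anywhere practical.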
Whatever this comes out to be, it is some fixed function of the input size (somewhere in the EXPSPACE family, I'd guess), so there is some fixed space bound.
By the space hierarchy theorem, there are always languages that require more space, so recursive regexes cannot recognise them; in particular, they are not Turing complete.
Do we need anything more than honest-to-goodness regular expressions?
Probably not. You should be able to evaluate an SK-expression from the bottom up (instead of top-down) if it is fully parenthesized: always evaluate the "innermost" expression, which will be the one containing no parentheses, and splice the result back into the surrounding string. I think something like this may work, though I've not tested it.
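One way to make "evaluate the innermost expression" concrete is the sketch below (my own construction, not a tested implementation). The only structural inspection is a single regex that finds the innermost parenthesised group; since that group contains no nested parentheses, the pattern is genuinely regular. K- and S-steps are plain string rewriting, and a stuck subterm is replaced by a fresh single-character name so the string the regex sees stays flat:

```python
import re

INNER = re.compile(r'\(([^()]*)\)')  # innermost group: no nested parens

def step(s, defs):
    """Rewrite a parenthesis-free application chain (left-associative,
    single-character atoms) until it is stuck or the S-rule reintroduces
    a parenthesised subterm."""
    while True:
        if s and s[0] in defs:                    # unfold a named subterm
            s = defs[s[0]] + s[1:]
        elif s.startswith('K') and len(s) >= 3:   # Kxy...  ->  x...
            s = s[1] + s[3:]
        elif s.startswith('S') and len(s) >= 4:   # Sxyz... ->  xz(yz)...
            x, y, z = s[1], s[2], s[3]
            return x + z + '(' + y + z + ')' + s[4:]
        else:
            return s

def evaluate(expr):
    """Normalise a fully parenthesised SK-expression, e.g. '(((SK)K)a)'.
    Loops forever on terms with no normal form, as any evaluator must."""
    defs = {}                                        # name -> flat subterm
    fresh = (chr(i) for i in range(0x100, 0xFFFF))   # assumed-unused chars
    while True:
        m = INNER.search(expr)
        if m is None:
            expr = step(expr, defs)
            if '(' not in expr:
                break
            continue
        body = step(m.group(1), defs)
        if '(' in body or len(body) == 1:            # splice and retry
            expr = expr[:m.start()] + body + expr[m.end():]
        else:                                        # stuck: name the subterm
            name = next(fresh)
            defs[name] = body
            expr = expr[:m.start()] + name + expr[m.end():]
    while any(c in defs for c in expr):              # expand names for output
        expr = ''.join('(' + defs[c] + ')' if c in defs else c for c in expr)
    return expr
```

For example, `evaluate("(((SK)K)a)")` returns `"a"`, as the identity combinator should. Note the use of fresh names is just a trick to keep every string the regex touches free of nesting; the bookkeeping outside the regex is ordinary programming.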
Set up a recursive substitution to recognise functional equivalence of combinators (so that it could identify ``SKK and ``SKS as functionally equivalent).
Equivalent to the halting problem in the general case. Let an SK-expression simulate a Turing machine M on input x, returning true when M halts; deciding whether it is functionally equivalent to the SK-expression that always returns true would decide whether M halts on x.
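For the two combinators named in the question, the equivalence can at least be checked pointwise; a sketch using Python closures for S and K (testing on sample arguments is of course not a decision procedure):

```python
# S and K as curried closures: Sxyz = xz(yz), Kxy = x.
S = lambda x: lambda y: lambda z: x(z)(y(z))
K = lambda x: lambda y: x

# ``SKK and ``SKS both behave as the identity on any argument:
for arg in ("a", 42, K):
    assert S(K)(K)(arg) == arg
    assert S(K)(S)(arg) == arg
```

Both expressions reduce any argument z to K z (_ z) = z, which is why they are extensionally equal even though they are syntactically distinct.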