http://www.perlmonks.org?node_id=132341


in reply to Re: Re: (OT) Where is programming headed?
in thread (OT) Where is programming headed?

You seem to be garbling up your math results.

First of all Dominus referred to Gödel's first incompleteness theorem, and not the unrelated proof that the consistency of ZF implied the consistency of ZFC. An axiom system is a series of postulates. An axiom system is consistent if it doesn't prove a contraction. That theorem states, quite precisely, No consistent finite first order axiom system which includes arithmetic can prove its own consistency. Dominus' point is that to cover what most people think of as algebra you need to include arithmetic, and so any given finite axiom system will always result in questions it cannot answer. (These questions can actually be boiled down to rather concrete things like whether a given polynomial is ever 0 if you put in positive integer coefficients.)

IMO Dominus referred to that one sloppily though. After all, there is a well-accepted set of axioms for an abstract algebra. But abstract algebras are not entirely what most people think of as algebra, and the study of abstract algebra examines structures with somewhat richer form, which gets us back to square one. There are algebra questions we want answered which need new axioms.

The result that you mangled is that ZF's consistency implies ZFC's consistency. What is ZF? ZF is a commonly accepted set of axioms for set theory which was produced by Zermelo and Fraenkel. They produced these axioms to rescue the idea of set theory (which is due to Cantor) from the paradoxes which had beset it. Note carefully that there was no question of showing that Cantor's set theory was consistent with choice; Cantor's set theory was self-contradictory. (Else Zermelo and Fraenkel would not have gotten into the axiom game.)

And indeed you are right that a few decades later Cohen managed to prove the independence of choice from ZF. More precisely, he proved that if ZF is consistent, then ZF does not prove choice. Or, in an alternate form, if ZF is consistent, then ZF plus the assertion that choice is false is also consistent.
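Spelled out in symbols (my own summary of the two halves, both standard results):

    \[
      \mathrm{Con}(\mathrm{ZF}) \;\Rightarrow\; \mathrm{Con}(\mathrm{ZFC})
        \qquad \text{(Gödel, via the constructible universe)}
    \]
    \[
      \mathrm{Con}(\mathrm{ZF}) \;\Rightarrow\; \mathrm{Con}(\mathrm{ZF} + \neg\mathrm{AC})
        \qquad \text{(Cohen, via forcing)}
    \]

Together these say that, relative to ZF, the axiom of choice can neither be proved nor refuted.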

Neat stuff. You didn't state it correctly, and it is completely irrelevant to anything that Dominus said, but it is still neat stuff to understand.

Now back to the actual conversation.

Hmmm...you seem to have misstated the Halting problem. The Halting problem is solvable in infinite time, you say? Well, what do you mean by that? I can run a simulation of a program. If it halts, it halts; if it doesn't, it doesn't. Is that what you mean? If so then you have a solution that is completely backwards from what anyone in theoretical CS or logic means by the Halting problem, and which is useless to boot. What I mean by "anyone" here would cover, say, the set of people who could upon request sketch a proof of both the impossibility of solving the Halting problem and Gödel's first incompleteness theorem, then proceed to explain how the two results are connected.
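For anyone who hasn't seen it, here is a sketch of the standard diagonalization argument, written as Perl. The would_halt() oracle is hypothetical; the whole point is that no such subroutine can exist.

    use strict;
    use warnings;

    # Hypothetical oracle: returns true if running $code on $input halts.
    # The argument below is why it cannot actually be written.
    sub would_halt {
        my ($code, $input) = @_;
        die "no such oracle can be written";
    }

    # A program that does the opposite of whatever the oracle predicts
    # about it:
    sub troublemaker {
        my ($code) = @_;
        if (would_halt($code, $code)) {
            1 while 1;    # oracle says "halts", so loop forever
        }
        return;           # oracle says "loops", so halt immediately
    }

    # Now ask what would_halt() should return when handed troublemaker's
    # own source as both arguments.  Either answer contradicts what
    # troublemaker actually does, so no such would_halt() can exist.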

When using technical terms around people who actually understand those terms, it is advisable to give them the usual meaning.

And finally, about all of the babble about automatic programming. First of all, theorem provers don't ever produce information that wasn't a logical consequence of the input. (At least they shouldn't.) While we may not have known that output, we do not get out anything truly independent of the input. But we do get conclusions that we might not have come up with on our own.
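A toy illustration (my own, and nothing like a real prover): a forward chainer only ever derives consequences of what it was given, but those consequences may still be news to the person who supplied the facts.

    use strict;
    use warnings;

    # Facts and rules supplied as input.
    my %fact = ( socrates_is_a_man => 1 );
    my @rule = (
        [ socrates_is_a_man  => 'socrates_is_mortal' ],
        [ socrates_is_mortal => 'socrates_will_die'  ],
    );

    # Keep applying rules until nothing new can be derived.
    my $changed = 1;
    while ($changed) {
        $changed = 0;
        for my $r (@rule) {
            my ($if, $then) = @$r;
            if ($fact{$if} and not $fact{$then}) {
                $fact{$then} = 1;
                $changed = 1;
            }
        }
    }

    # Everything printed here was implicit in the input, but some of it
    # ("socrates_will_die") was never stated explicitly.
    print "$_\n" for sort keys %fact;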

Similarly, look at compilers. While a compiler, by design, isn't supposed to invent the behaviour of the overall program, it is allowed and encouraged to find optimizations that the human didn't think of. The programmer need not understand loop unrolling; the compiler will do it for you. And if a compiler deals with a specialized language which is not Turing complete (allowing it to avoid the Halting problem), it may be able to do an incredible job of figuring out how to get done what you wanted done. The classic example of this is relational databases.
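To make the loop unrolling point concrete, here is the flavour of rewrite an optimizer might do behind your back (illustration only; a real compiler does this to the generated machine code, not to your Perl source):

    use strict;
    use warnings;

    my @data = (1 .. 1000);

    # What the programmer writes:
    my $sum = 0;
    $sum += $_ for @data;

    # Roughly what an unrolling optimizer produces: four additions per
    # pass through the loop, so much less loop overhead.
    my $sum2 = 0;
    my $i    = 0;
    for ($i = 0; $i + 3 < @data; $i += 4) {
        $sum2 += $data[$i] + $data[$i+1] + $data[$i+2] + $data[$i+3];
    }
    $sum2 += $data[$_] for $i .. $#data;    # pick up any leftovers

    print "same answer: ", ($sum == $sum2 ? "yes" : "no"), "\n";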

So while neither compilers nor theorem provers should produce output that is not a logical consequence of the input, both draw conclusions that humans would not have, and apply that to the benefit of the human. (In fact, as much as you talk compilers down and theorem provers up, good compilers incorporate theorem provers. Hmmm...)

Now where does automatic programming fit in? Well first of all I have to say that I don't know very much about automatic programming. Virtually nothing in fact. But your track record on things that I do know something about doesn't incline me to accept your description of automatic programming.

So I went off to Google. After visiting a few sites it appears that automatic programming is the art of having computer programs write programs from a specification. (The trick being that the specification is at least somewhat incomplete.) Now you seem to be a great fan of the idea that someone somewhere will reduce this down to a few axioms. I rather strongly doubt this. This is a problem which is going to have many different solutions. The trick isn't in making up the missing bits of the specification; it is doing it in a way that comes up with answers the human thinks are reasonable. And there isn't a simple right answer to that.
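A trivial illustration of why the missing bits matter (my example, not anything from the automatic programming literature): the request "sort these names" is satisfied by both programs below, yet most users will consider only one of them right.

    use strict;
    use warnings;

    my @names = ('alice', 'Bob', 'carol');

    # One perfectly valid reading of "sort these names":
    my @ascii  = sort @names;                          # Bob alice carol

    # Another perfectly valid reading:
    my @folded = sort { lc($a) cmp lc($b) } @names;    # alice Bob carol

    print "@ascii\n@folded\n";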

So you could have a program which actually solves the problem as stated perfectly - but whose solutions are as disliked as Microsoft's wizards. For the same reason. Because it guesses, and guesses wrongly...