in reply to Re: (OT) Where is programming headed?

in thread (OT) Where is programming headed?

> Perhaps I misunderstand your meaning, but it seems to me that Gödel's

> first incompleteness theorem says exactly that algebra cannot be

> reduced to a finite set of postulates.

Gödel proved that the axiom of choice can't be disproved from the other axioms of Zermelo-Fraenkel set theory. Cohen followed that up by proving that it can't be proved from them either, which makes the axiom of choice independent of the remaining axioms. In other words, instead of a single concept of set theory, we have two options: one where we assume that the axiom of choice is true, and one where we assume that it's false. The remaining postulates don't give us enough information to choose one over the other, and in that sense they're incomplete.

You can't apply Gödel's result the opposite way, though. In either case, the system you choose is completely generated from its particular basis.

> If "brute-force examination" were sufficient, one would write a

> computer program to perform the brute-force examination and come up

> with the answer. But this is precisely what we know is impossible.

We're missing each other slightly on this one.. you're talking about a finite, algorithmic solution, and I'm talking about the infinite, theoretical one. You're saying the halting problem can't be solved by a Turing machine, and I'm saying it can, but only in infinite time. We don't disagree, we're just saying the same thing in different ways.

By contrast.. and I'll refer back to this in a minute.. some problems can't be solved at all, even in infinite time. Random number prediction is an example. It doesn't matter what equipment we use, or how much information we gather, we'll never be able to predict the next value of a true random sequence.

> I won't address your assertion that a solution to the halting problem

> will provide a solution to "the automatic programming problem",

> because you still haven't said what "the automatic programming

> problem" is, and I'm starting to suspect that you don't have a very

> clear idea of it yourself.

Well, I certainly haven't gotten it across so far.. ;-)

Let's try this: In your original response, you said that gcc "takes an input specification and writes a machine language program to implement that specification".

Now let me ask this: where does that input specification come from? Could you write a program to generate it? If so, what would you have to feed into that program?

Compilers are translators. You don't get more out of them than you put in.

Theorem provers, OTOH, can start from a set of basic axioms and discover arithmetic and higher mathematics on their own. They can even solve problems that humans haven't been able to.. In 1933, Herbert Robbins came up with a set of axioms that he thought were a basis for Boolean algebra, but nobody could prove it. In 1996, William McCune's program EQP crunched out the proof. Nobody gave EQP an input specification that it translated into a proof; it came up with the proof on its own. We got something out that we didn't put in.

Genetic algorithms also produce solutions to problems we can't specify, but they do it through random perturbation. In effect, they assemble jigsaw puzzles by shaking the box for a while, opening it, gluing any interlocked pieces together, then closing the box and shaking it again. Even so, we get something out that we didn't put in.
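To make the box-shaking concrete, here's a minimal sketch of a genetic algorithm in Python.. the target (all-ones bit string), population size, and mutation rate are all illustrative choices of mine, not taken from any particular system. The point is that nothing in the code constructs the answer directly: it only shakes (mutation), glues (crossover), and keeps the puzzles with the most interlocked pieces (selection).

```python
import random

TARGET_LEN = 32       # length of the bit string we're evolving
POP_SIZE = 40
MUTATION_RATE = 0.02  # per-bit chance of a flip each generation

def fitness(bits):
    # "interlocked pieces": how many bits already match the all-ones target
    return sum(bits)

def mutate(bits):
    # shake the box: flip each bit with small probability
    return [b ^ (random.random() < MUTATION_RATE) for b in bits]

def crossover(a, b):
    # glue two partial solutions together at a random cut point
    cut = random.randrange(TARGET_LEN)
    return a[:cut] + b[cut:]

random.seed(1)
pop = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
       for _ in range(POP_SIZE)]

for generation in range(500):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == TARGET_LEN:
        break
    # breed the next generation from the fitter half of the population
    half = pop[:POP_SIZE // 2]
    pop = [mutate(crossover(random.choice(half), random.choice(half)))
           for _ in range(POP_SIZE)]

print(generation, fitness(max(pop, key=fitness)))
```

Nobody tells the program "set every bit to 1", yet that's what comes out.. which is exactly the sense in which we get something out that we didn't put in.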

If we put theorem provers on one side, and genetic algorithms on the other, automatic programming falls somewhere between.

If programming can be reduced to a finite set of generative postulates, automatic programming will collapse to theorem proving. We'll be able to write programs that derive other programs from the core set of postulates, and they'll eventually write code faster and better than humans can.
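A toy illustration of that collapse, hypothetical and mine rather than any real system: treat a tiny set of arithmetic building blocks as the "postulates", and derive a program from a specification by brute-force enumeration, checking each candidate against input/output examples. All the names below (`PRIMITIVES`, `synthesize`) are made up for this sketch.

```python
from itertools import product

# "Postulates": a tiny grammar of unary integer functions,
# chosen purely for illustration.
PRIMITIVES = {
    "x":   lambda x: x,
    "x+1": lambda x: x + 1,
    "x*2": lambda x: x * 2,
    "x*x": lambda x: x * x,
}

def synthesize(examples, max_depth=3):
    """Enumerate compositions of primitives until one matches every example."""
    candidates = list(PRIMITIVES.items())
    for _ in range(max_depth):
        for name, fn in candidates:
            if all(fn(inp) == out for inp, out in examples):
                return name, fn
        # grow the search space: compose every primitive f with every
        # current candidate g, giving f(g(x))
        candidates = [(f"{fname}({gname})", lambda x, f=f, g=g: f(g(x)))
                      for (fname, f), (gname, g)
                      in product(PRIMITIVES.items(), candidates)]
    return None, None

# Specification by examples: f(x) = (x+1)^2
spec = [(0, 1), (1, 4), (2, 9)]
name, fn = synthesize(spec)
print(name)
```

It "writes" a program for (x+1)^2 without being handed one.. but only because the postulate set is finite and generative. That's the whole question in miniature.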

If programming can't be reduced to a finite set of generative postulates, it means that writing software is like trying to guess random numbers. No algorithmic solution is possible, and genetic algorithms hooked to expert systems are the closest we'll ever get to programs that generate software without requiring an input specification that completely defines the output.

> Are you now saying that code to compute the square root of 2 will fail

> "as often as not"? That isn't true either.

Uh, no.. I'm saying that computers represent floats with a fixed number of binary digits of mantissa, and that the truncation involved is notorious for producing microscopic variations. See Question 14.4 from the C FAQ, for instance. Subtracting one computed root of 2 from another might give you something like 1e-16, which is awfully close to zero, but still different enough to fail a straight equality test. That goes back to my original point regarding computers being finicky about precision.. as opposed to humans. ;-)
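The precision point is easy to demonstrate. A minimal sketch in Python (doubles here, but C floats behave the same way, just with less precision): two mathematically identical expressions for 2 disagree in the last binary digit, so the usual fix.. the one the C FAQ recommends.. is to compare against a small tolerance instead of testing straight equality.

```python
import math

a = math.sqrt(2) * math.sqrt(2)   # mathematically exactly 2
b = 2.0

print(a == b)   # False: the truncated mantissa leaves a tiny residue
print(a - b)    # on the order of 1e-16 for doubles

# The usual fix: compare against a tolerance scaled to the operands
def nearly_equal(x, y, eps=1e-9):
    return abs(x - y) <= eps * max(abs(x), abs(y))

print(nearly_equal(a, b))   # True
```

A human glancing at `sqrt(2) * sqrt(2)` says "that's 2".. the machine says "that's 2.0000000000000004". Finicky.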

mike
