in reply to Re: Re: (OT) Where is programming headed? in thread (OT) Where is programming headed?
Says mstone:
Automatic programming boils down to the deep theoretical question of whether all
programming can be reduced to a finite set of generative postulates, the way, say,
algebra can.
Perhaps I misunderstand your meaning, but it seems to me that
Gödel's first incompleteness theorem says exactly that algebra
cannot be reduced to a finite set of postulates.
Right now, we don't know if that binary sequence can be found by any means other than
brute-force examination of every possible program.
You seem to have a deep misunderstanding of the halting problem.
Right now, we believe that no method, including "brute-force examination",
can determine if every program halts. If "brute-force examination"
were sufficient, one would write a computer program to perform the
brute-force examination and come up with the answer. But this is precisely
what we know is impossible.
As a very simple example, consider this program:
use Math::BigInt;

# Print the first member of a twin prime pair above the given number.
my $N = Math::BigInt->new(shift @ARGV);
while (1) {
    $N += 1;
    next unless prime($N);
    next unless prime($N + 2);
    print "$N\n";
    exit;
}

# Naive primality test by trial division.
sub prime {
    my $n = shift;
    for (my $i = Math::BigInt->new(2); $i * $i <= $n; $i += 1) {
        return 0 if $n % $i == 0;
    }
    return 1;
}
You can't tell me whether this program will halt on all inputs.
Not by "brute-force examination" or by any other method.
Why not? Because you don't know how to figure that out.
Neither does anyone else.
(The program halts on an input N exactly when there is a twin prime pair
above N, and nobody knows whether there are infinitely many twin primes.)
I won't address your assertion that a solution to the halting problem
will provide a solution to "the automatic programming problem",
because you still haven't said what "the automatic programming problem"
is, and I'm starting to suspect that you don't have a very clear idea of it yourself.
It might even be possible to calculate that value with a Turing
machine.. in which case, we could sidestep the current proof for the halting problem's
unsolvability without actually contradicting it.
No, you couldn't. The proof completely rules out this
exact possibility.
I don't think you could understand the proof and still believe this.
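To spell out exactly what the proof rules out, here is the standard diagonal argument as Perl-flavoured pseudocode; would_halt() is the hypothetical halting decider, and the whole point of the argument is that no such subroutine can exist:

```perl
# PSEUDOCODE -- would_halt() is hypothetical; the argument shows it cannot exist.
sub contrary {
    my $program = shift;
    if (would_halt($program, $program)) {
        while (1) { }       # decider said "halts", so loop forever
    }
    return;                 # decider said "loops", so halt at once
}
# Does contrary(\&contrary) halt? If would_halt answers "yes", contrary
# loops forever; if it answers "no", contrary halts. Either way the decider
# is wrong about this one input, so no correct would_halt can be written.
```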
Point taken, though one could probably call that a compiler issue.. IIRC, the ANSI C
spec defines all floating-point representations in terms of a minimal error variable elim.
You're the one that brought it up; if you wanted to leave compiler-dependent
issues out of the discussion, you should have done that.
As for the square root of 2, what of it? Are you now saying
that code to compute the square root of 2 will fail "as often as not"?
That isn't true either.
The term elim does not appear in the 1999 C language
standard, which I consulted before posting my original reply.

Mark Dominus
Perl Paraphernalia
Re: Re: (OT) Where is programming headed? by mstone (Deacon) on Dec 16, 2001 at 13:49 UTC 
> Perhaps I misunderstand your meaning, but it seems to me that Gödel's
> first incompleteness theorem says exactly that algebra cannot be
> reduced to a finite set of postulates.
Gödel proved that the axiom of choice can't be disproved using the other axioms of Cantor set theory. Cohen followed that up by proving that the axiom of choice is independent of the remaining axioms. In other words, instead of a single concept of set theory, we have two options: one where we assume that the axiom of choice is true, and one where we assume that it's false. Cantor's postulates don't give us enough information to choose one over the other, and that makes them incomplete.
You can't apply Gödel's theorem the opposite way, though. In either case, the system you choose is completely generated from its particular basis.
> If "bruteforce examination" were sufficient, one would write a
> computer program to perform the bruteforce examination and come up
> with the answer. But this is precisely what we know is impossible.
We're missing each other slightly on this one.. you're talking about a finite, algorithmic solution, and I'm talking about the infinite, theoretical one. You're saying the halting problem can't be solved by a Turing machine, and I'm saying it can, but only in infinite time. We don't disagree, we're just saying the same thing in different ways.
By contrast.. and I'll refer back to this in a minute.. some problems can't be solved at all, even in infinite time. Random number prediction is an example. It doesn't matter what equipment we use, or how much information we gather, we'll never be able to predict the next value of a true random sequence.
> I won't address your assertion that a solution to the halting problem
> will provide a solution to "the automatic programming problem",
> because you still haven't said what "the automatic programming
> problem" is, and I'm starting to suspect that you don't have a very
> clear idea of it yourself.
Well, I certainly haven't gotten it across so far.. ;)
Let's try this: In your original response, you said that gcc "takes an input specification and writes a machine language program to implement that specification".
Now let me ask this: where does that input specification come from? Could you write a program to generate it? If so, what would you have to feed into that program?
Compilers are translators. You don't get more out of them than you put in.
Theorem provers, OTOH, can start from a set of basic axioms and discover arithmetic and higher mathematics on their own. They can even solve problems that humans haven't.. In 1933, Herbert Robbins came up with a set of axioms that he thought were a basis for Boolean algebra, but nobody could prove it. In 1996, a program called EQP crunched out the proof. Nobody gave EQP an input specification that it translated into a proof, it came up with the proof on its own. We got something out that we didn't put in.
Genetic algorithms also produce solutions to problems we can't specify, but they do it through random perturbation. In effect, they assemble jigsaw puzzles by shaking the box for a while, opening it, gluing any interlocked pieces together, then closing the box and shaking it again.
Even so, we get something out that we didn't put in.
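To make the box-shaking concrete, here is a toy genetic algorithm in Perl. The target string, population size, and mutation rate are arbitrary choices for illustration, not anything canonical:

```perl
use strict;
use warnings;

my $target = "perlmonks";                       # arbitrary goal for the toy run
my @pop    = map { random_string() } 1 .. 50;   # the initial "box of pieces"

for my $gen (1 .. 10_000) {
    # Score each candidate by how many characters already match the target.
    @pop = sort { fitness($b) <=> fitness($a) } @pop;
    last if $pop[0] eq $target;
    # Keep the best half; refill the rest with mutated copies ("shake the box").
    @pop = (@pop[0 .. 24], map { mutate($pop[$_ % 25]) } 0 .. 24);
}
print "$pop[0]\n";

sub fitness {
    my $s = shift;
    my $score = 0;
    $score++ for grep { substr($s, $_, 1) eq substr($target, $_, 1) }
                 0 .. length($target) - 1;
    return $score;
}

sub mutate {
    my $s = shift;                              # works on a copy of the parent
    substr($s, int(rand length $s), 1) = random_char() if rand() < 0.8;
    return $s;
}

sub random_char   { ('a' .. 'z')[int rand 26] }
sub random_string { join '', map { random_char() } 1 .. 9 }
```

Nobody tells the population how to spell the target; selection just glues the lucky interlocked pieces in place between shakes.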
If we put theorem provers on one side, and genetic algorithms on the other, automatic programming falls somewhere between.
If programming can be reduced to a finite set of generative postulates, automatic programming will collapse to theorem proving. We'll be able to write programs that derive other programs from the core set of postulates, and they'll eventually write code faster and better than humans can.
If programming can't be reduced to a finite set of generative postulates, it means that writing software is like trying to guess random numbers. No algorithmic solution is possible, and genetic algorithms hooked to expert systems are the closest we'll ever get to programs that generate software without requiring an input specification that completely defines the output.
> Are you now saying that code to compute the square root of 2 will fail
> "as often as not"? That isn't true either.
Uh, no.. I'm saying that computers represent floats as N-bit binary logarithms, and that logs are notorious for producing microscopic variations when you truncate them. See Question 14.4 from the C FAQ, for instance. Subtracting one root of 2 from another might give you 1e-37, which is awfully close to zero, but still different enough to fail a straight equality test. That goes back to my original point regarding computers being finicky about precision.. as opposed to humans. ;)
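The point is easy to demonstrate. The exact residue varies by platform, but on ordinary IEEE doubles the naive equality test fails:

```perl
use strict;
use warnings;

my $r      = sqrt(2);
my $square = $r * $r;

# The stored root is truncated to fit a double, so squaring it does not
# give back exactly 2.
printf "equal to 2?   %s\n", ($square == 2 ? "yes" : "no");
printf "off by:       %g\n", $square - 2;

# The usual fix: compare against a small tolerance instead of testing
# straight equality.
my $epsilon = 1e-9;
printf "close enough? %s\n", (abs($square - 2) < $epsilon ? "yes" : "no");
```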
mike

You seem to be garbling up your math results.
First of all Dominus referred to Gödel's first
incompleteness theorem, and not the unrelated proof that
the consistency of ZF implied the consistency of ZFC. An
axiom system is a series of postulates. An axiom system
is consistent if it doesn't prove a contradiction. That
theorem states, quite precisely: No consistent finite
first-order axiom system which includes arithmetic can prove
its own consistency. Dominus' point is that to
cover what most people think of as algebra you need to
include arithmetic, and so any given finite axiom system
will always result in questions it cannot answer. (These
questions can actually be boiled down to rather concrete
things like whether a given polynomial is ever 0 if you put
in positive integer coefficients.)
IMO Dominus referred to that one sloppily though. After
all there is a well-accepted set of axioms for an abstract
algebra. But abstract algebras are not entirely what most
people think of as algebra, and the actual study of abstract
algebra studies structures with somewhat richer form, which
gets us back to square one. There are algebra questions
we want answered which need new axioms.
The result that you mangled is that ZF's consistency
implies ZFC's consistency. What is ZF? ZF is a commonly
accepted set of axioms for set theory which was produced by
Zermelo and Fraenkel. They produced these axioms to rescue
the idea of set theory (which is due to Cantor) from the
paradoxes which had beset it. Note carefully that there is
no question of showing that Cantor's set theory was
consistent with choice: Cantor's set theory was self-contradictory.
(Else Zermelo and Fraenkel would not have
gotten into the axiom game.)
And indeed you are right that a few decades later Cohen
managed to prove the independence of choice from ZF. More
precisely he proved that ZF plus the consistency of ZF
did not prove choice. Or, in alternate form, if ZF is
consistent, then ZF plus the assertion that choice is
false is also consistent.
Neat stuff. You didn't state it correctly, and it is
completely irrelevant to anything that Dominus said, but
it is still neat stuff to understand.
Now back to the actual conversation.
Hmmm...you seem to have misstated the Halting problem.
The Halting problem is solvable in infinite time, you say?
Well, what do you mean by that? I can run a simulation
of a program. If it halts, it halts, if it doesn't it
doesn't. Is that what you mean? If so then you have a
solution that is completely backwards from what anyone in
theoretical CS or logic means by the Halting problem,
which is useless to boot. What I mean by "anyone" here
would cover, say, the set of people who could upon
request sketch a proof of both the impossibility of
solving the Halting problem and Gödel's first
incompleteness theorem, then proceed to explain how the
two results are connected.
When using technical terms around people who actually
understand those terms, it is advisable to give
them the usual meaning.
And finally about all of the babble about automatic
programming. First of all theorem provers don't ever
produce information that wasn't a logical consequence
of the input. (At least they shouldn't.) While we
may not have known that output, we do not get out
anything truly independent of the input. But we do
get conclusions that we might not have come up with on
our own.
Similarly look at compilers. While a compiler, by
design, isn't supposed to invent the behaviour of the
overall program, it is allowed and encouraged to find
optimizations that the human didn't think of. The
programmer need not understand loop unrolling. The
compiler will do it for you. If the compiler can
deal with specialized languages which are not Turing
complete (allowing it to avoid the Halting problem),
they may be able to do an incredible job of figuring
out how to get done what you wanted done. The classic
example of this is relational databases.
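To give the loop unrolling mentioned above a concrete picture, here is the transformation written out in Perl source (real compilers do this to the machine code, not the source, and the unroll-by-four factor is an arbitrary choice):

```perl
use strict;
use warnings;

my @a = (1 .. 1000);

# Before: one add, one test, one jump per element.
my $sum = 0;
$sum += $_ for @a;

# After unrolling by four: the loop bookkeeping runs a quarter as often.
my $sum2 = 0;
my $i;
for ($i = 0; $i + 3 < @a; $i += 4) {
    $sum2 += $a[$i] + $a[$i + 1] + $a[$i + 2] + $a[$i + 3];
}
$sum2 += $a[$i++] while $i < @a;   # mop up any leftover elements

print "$sum $sum2\n";              # identical results either way
```

The programmer writes the first form; the optimizer quietly substitutes something like the second.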
So while neither compilers nor theorem provers should
produce output that is not a logical consequence
of the input, both draw conclusions that humans would
not have and apply that to the benefit of the human.
(In fact, as much as you talk compilers down and
theorem provers up, good compilers incorporate theorem
provers. Hmmm...)
Now where does automatic programming fit in? Well
first of all I have to say that I don't know very much
about automatic programming. Virtually nothing in
fact. But your track record on things that I do know
something about doesn't incline me to accept your
description of automatic programming.
So I went off to Google. After visiting a few sites
it appears that automatic programming is the art of
having computer programs that write programs from a
specification. (The trick being that the
specification is at least somewhat incomplete.) Now
you seem to be a great fan of the idea that someone
somewhere will reduce this down to a few axioms. I
rather strongly doubt this. This is a problem which
is going to have many different solutions. The trick
isn't in making up the missing bits of the
specification, it is doing it in a way that comes up
with answers the human thinks are reasonable. And
there isn't a simple right answer to that.
So you could have a program which actually solves the
problem as stated perfectly, but whose solutions are
as disliked as Microsoft's wizards. For the same
reason. Because it guesses, and guesses wrongly...

Good post tilly. (++ btw). I hope I won't come across as too pedantic if I offer a small correction.
You stated the first incompleteness theorem as:
No consistent finite first-order axiom system which includes arithmetic can prove its own consistency.
Now, that's actually a statement of the second incompleteness theorem. Also, the word "finite" is out of place, and severely weakens the claim. Even PA (Peano Arithmetic) can't be finitely axiomatised in first-order logic, because induction has to be defined using an infinite scheme. I expect you meant "recursive" instead of "finite". (In other words, there has to be a mechanical procedure which can decide whether or not any given sentence is an axiom, but the total number of axioms might be infinite.)
The first incompleteness theorem (as strengthened by Rosser) states:
In any consistent recursive first-order theory which includes arithmetic, there is a sentence which can be neither proved nor disproved in the theory. I.e. the theory is incomplete.
(Gödel's original version only applies to theories which have a somewhat artificial property which he called omega-consistency.)
If anyone is seriously interested in logic and the theory of computation, I would strongly recommend Boolos & Jeffrey, which is remarkably well-written and complete. But be warned: even though the authors have done a fantastic job of making it as easy as possible to understand, it isn't light reading.
Update: (thanks sevensven) I should add that Douglas Hofstadter's Gödel, Escher, Bach is an inspiring, idiosyncratic book which (among other things) explains Gödel's proof of the first incompleteness theorem. At the very least it might inspire you to learn more about the subject. (It did for me!)

couldn't resist.. ;)
> An axiom system is a series of postulates. An axiom system is
> consistent if it doesn't prove a contradiction. That theorem states,
> quite precisely: No consistent finite first-order axiom system which
> includes arithmetic can prove its own consistency.
I think we're coming at the problem from opposite directions, so you see an elephant facing left, and I see an elephant facing right.
I agree that no finite set of axioms including arithmetic can generate a system that's simultaneously correct (every statement the basis generates is guaranteed to be true) and complete (the basis is guaranteed to generate every true statement). AFAIK, that doesn't negate either the existence or utility of systems that do generate from finite bases, though.
I never stated a requirement of simultaneous completeness and correctness. I made a quick reference to evoke the concept of formal systems, and in retrospect, I guess I should have said 'Post systems'.
> When using technical terms around people who actually understand
> those terms, it is advisable to give them the usual meaning.
Okay, let's review some vocabulary:
An algorithm is a procedure that solves a problem. The solution it produces must be correct, and the procedure must halt in a finite amount of time.
If an algorithm can solve a problem in finite time, the problem is decidable. If no algorithm can solve the problem in finite time, the problem is undecidable.
A solvable problem is one that can be solved by a Turing machine. A problem that cannot be solved by a Turing machine is unsolvable.
Every decidable problem is solvable. Every unsolvable problem is undecidable.
Please note that solvability carries no requirement of finite solution time, the way decidability does. We often use the terms interchangeably, but there are times when it's useful to make the distinction between problems that can be solved in infinite time, and problems that can't.
If we say that the halting problem is solvable, it means every program that should halt will, and every program that shouldn't halt won't. That just restates our basic definition of halting, so it must be true. If we say that the halting problem is unsolvable, we admit the possibility of programs that should halt but don't, and programs that shouldn't halt but do. That directly contradicts our definition of halting, so it must be false. If you reject solutions that take infinite time, you break the definition of halting.
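For what it's worth, the finite-time half of that distinction can actually be written down: a simulator can confirm that a program halts by running it, but about a program that never halts it can only ever report "unknown" within any finite step budget. A toy sketch (the two-instruction machine here is invented purely for illustration):

```perl
use strict;
use warnings;

# Semi-decision sketch: run a toy machine for up to $budget steps.
# Returns "halts" if it stops in time, "unknown" otherwise -- never "loops".
sub runs_within {
    my ($program, $budget) = @_;   # $program: arrayref of [op, arg] pairs
    my ($pc, $acc, $steps) = (0, 0, 0);
    while ($pc < @$program) {
        return "unknown" if ++$steps > $budget;
        my ($op, $arg) = @{ $program->[$pc] };
        if    ($op eq 'add') { $acc += $arg; $pc++ }
        elsif ($op eq 'jnz') { $pc = $acc != 0 ? $arg : $pc + 1 }
        else                 { $pc++ }          # unknown ops are no-ops
    }
    return "halts";
}

# This two-instruction program loops forever; no finite budget will ever
# tell us that, it will only keep answering "unknown".
my $loop = [ ['add', 1], ['jnz', 0] ];
my $halt = [ ['add', 1] ];

print runs_within($halt, 100), "\n";   # halts
print runs_within($loop, 100), "\n";   # unknown
```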
> First of all theorem provers don't ever produce information that
> wasn't a logical consequence of the input. (At least they shouldn't.)
> While we may not have known that output, we do not get out anything
> truly independent of the input.
No arguments there..
> But we do get conclusions that we might not have come up with on our
> own.
Bingo! That's what I'm talking about. The interesting question is how much distance we can put between the specification and the product.
There's a qualitative difference between a program that can extrapolate code from a minimal basis, and a Microsoft wizard. The first explores an infinite generative system, and the second walks through a finite data structure that somebody else filled and ordered. The first can expand its code base on its own, while the second will always be handcuffed to whoever fills its lookup table. The first will produce code that's correct within the limits of its basis and generative capacity, while the second is subject to all the size-versus-implicit-bug problems that plague any large body of code.
> Now you seem to be a great fan of the idea that someone somewhere
> will reduce this down to a few axioms. I rather strongly doubt this.
So do I. I just think it's a significant and interesting question.. as opposed to the brain-starved dribblings of a poser who should know better than to try and cut heads with his betters. ;)
> This is a problem which is going to have many different solutions.
> The trick isn't in making up the missing bits of the specification,
> it is doing it in a way that comes up with answers the human thinks
> are reasonable. And there isn't a simple right answer to that.
And that's why I think we'll always need programmers, no matter how much of the specification can be generated by a program. ;)
mike