in reply to Re: (OT) Where is programming headed? in thread (OT) Where is programming headed?
> > Programmers will be necessary until someone solves the halting
> > problem. The automatic programming problem (writing a program that
> > writes programs for you) has been proven NP-complete.
>
> I'm not sure what you mean here, and most of the obvious
> interpretations are false. If you mean that it's impossible to
> write a program that writes programs, that's wrong, because I have
> a program here that I use every day that takes an input
> specification and writes a machine language program to implement
> that specification; it's called gcc. Programs write other programs
> all the time.
gcc doesn't 'write programs' per se; it just translates programs you've written in
one formal language (C) into another formal language (machine code).
Automatic programming boils down to the deep theoretical
question of whether all programming can be reduced to a finite set of
generative postulates, the way, say, algebra can.
If that were true, programmers would be unnecessary. A few
supercomputers with theorem-proving software could sift their way
through all the permutations, spitting out every possible well-formed
program along the way.
> I doubt if there's any meaningful sense in which "the automatic
> programming problem" (whatever that is) can be shown to be
> NP-complete. If you mean that it's equivalent to the halting problem,
> you might also want to note that the halting problem is not
> NP-complete, and revise accordingly.
I misspoke.. the automatic programming problem is AI-complete,
not NP-complete. The halting problem is, indeed, impossible to solve with a Turing machine, which I believe Von Neumann classified as a 'type-0' computer.
However, a Von Neumann 'type-1' computer (if I've got the numbers
right) can solve the halting problem. A type-1 Von Neumann
machine has an infinite tape full of ones and zeros, where the address
of each cell corresponds to a unique C program, and the value of each cell is a boolean that says whether the program halts or not. In other words, it's a giant lookup table.
Right now, we don't know if that binary sequence can be found by any
means other than brute-force examination of every possible program.
However, if programming can be reduced to a finite set of
generative postulates, it may be possible to calculate
the value of any given bit in finite time.
It might even be possible to calculate that value with a Turing machine.. in which case, we could sidestep the current proof for the halting problem's
unsolvability without actually contradicting it.
Either way, my original assertion still stands. A solution to the halting problem would
generate a solution to automatic programming, and vice versa.
> But you're mistaken; the result of the test is guaranteed to be true.
> Certain float values can be compared exactly, and 1 is among those
> values.
Point taken, though one could probably call that a compiler issue.. IIRC, the ANSI C spec defines all floating-point representations in terms of a minimal error variable elim.
Please insert the code necessary to calculate the square root of 2.
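[Editor's note: as requested, a minimal sketch of computing the square root of 2 by Newton's method. It is in Python for brevity, but the floating-point behaviour is the same in C or Perl, since all three typically use IEEE-754 doubles. It illustrates the point under debate: the computed root is correct to the last bit of a double, yet squaring it does not give back exactly 2.]

```python
import math

def sqrt2_newton(iterations=20):
    """Approximate sqrt(2) by Newton's method: x <- (x + 2/x) / 2."""
    x = 1.0
    for _ in range(iterations):
        x = (x + 2.0 / x) / 2.0
    return x

root = sqrt2_newton()
print(root)                # agrees with math.sqrt(2) to ~16 digits
print(root * root == 2.0)  # False: the square is off by ~1 part in 10**16
```

The failure of the last equality test is not a bug in Newton's method; no finite binary fraction squares to exactly 2, so the residue is unavoidable.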
mike
Re: (OT) Where is programming headed?
by Dominus (Parson) on Dec 15, 2001 at 06:54 UTC

Says mstone:
> Automatic programming boils down to the deep theoretical question of
> whether all programming can be reduced to a finite set of generative
> postulates, the way, say, algebra can.
Perhaps I misunderstand your meaning, but it seems to me that
Gödel's first incompleteness theorem says exactly that algebra
cannot be reduced to a finite set of postulates.
> Right now, we don't know if that binary sequence can be found by any
> means other than brute-force examination of every possible program.
You seem to have a deep misunderstanding of the halting problem.
Right now, we know that no method, including "brute-force examination",
can determine whether every program halts. If "brute-force examination"
were sufficient, one would write a computer program to perform the
brute-force examination and come up with the answer. But this is
precisely what we know is impossible.
As a very simple example, consider this program:
use Math::BigInt;

my $N = Math::BigInt->new(shift @ARGV);
while (1) {
    $N += 1;
    next unless prime($N);
    next unless prime($N+2);
    print "$N\n";
    exit;
}

sub prime {
    my $n = shift;
    return 0 if $n < 2;
    for (my $i = Math::BigInt->new(2); $i * $i <= $n; $i += 1) {
        return 0 if $n % $i == 0;
    }
    return 1;
}
You can't tell me whether this program will halt on all inputs.
Not by "brute force examination" or by any other method.
Why not? Because you don't know how to figure that out.
Neither does anyone else.
I won't address your assertion that a solution to the halting problem
will provide a solution to "the automatic programming problem",
because you still haven't said what "the automatic programming problem"
is, and I'm starting to suspect that you don't have a very clear idea of it yourself.
> It might even be possible to calculate that value with a Turing
> machine.. in which case, we could sidestep the current proof for the
> halting problem's unsolvability without actually contradicting it.
No, you couldn't. The proof completely rules out this
exact possibility.
I don't think you could understand the proof and still believe this.
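[Editor's note: the proof in question is the familiar diagonal argument, and it can be sketched in a few lines. This is a toy illustration in Python; `candidate_halts` is a hypothetical stand-in for any claimed halting tester, and an endless loop is simulated by returning a marker string so the demonstration itself terminates.]

```python
def candidate_halts(func):
    """Hypothetical halting 'oracle': claims to predict whether func() halts.
    Substitute any fixed strategy you like; the construction below defeats it."""
    return True

def diagonal():
    """Does the opposite of whatever the oracle predicts about diagonal itself."""
    if candidate_halts(diagonal):
        return "loops forever"   # stands in for: while True: pass
    else:
        return "halts"

prediction = candidate_halts(diagonal)   # the oracle's claim about diagonal()
behaviour = diagonal()                   # what diagonal() actually does
# Whatever candidate_halts returns, its prediction about diagonal() is wrong.
# That contradiction is why no total, correct halting tester can exist.
```

Since the argument works against any candidate tester, no amount of cleverness in `candidate_halts` escapes it, which is exactly why the proof "completely rules out this exact possibility."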
> Point taken, though one could probably call that a compiler issue..
> IIRC, the ANSI C spec defines all floating-point representations in
> terms of a minimal error variable elim.
You're the one that brought it up; if you wanted to leave compiler-dependent
issues out of the discussion, you should have done that.
As for the square root of 2, what of it? Are you now saying
that code to compute the square root of 2 will fail "as often as not"?
That isn't true either.
The term elim does not appear in the 1999 C language
standard, which I consulted before posting my original reply.

Mark Dominus
Perl Paraphernalia

> Perhaps I misunderstand your meaning, but it seems to me that Gödel's
> first incompleteness theorem says exactly that algebra cannot be
> reduced to a finite set of postulates.
Gödel proved that the axiom of choice can't be disproved using the other axioms of Cantor set theory. Cohen followed that up by proving that the axiom of choice is independent of the remaining axioms. In other words, instead of a single concept of set theory, we have two options: one where we assume that the axiom of choice is true, and one where we assume that it's false. Cantor's postulates don't give us enough information to choose one over the other, and that makes them incomplete.
You can't apply Gödel's theorem the opposite way, though. In either case, the system you choose is completely generated from its particular basis.
> If "brute-force examination" were sufficient, one would write a
> computer program to perform the brute-force examination and come up
> with the answer. But this is precisely what we know is impossible.
We're missing each other slightly on this one.. you're talking about a finite, algorithmic solution, and I'm talking about the infinite, theoretical one. You're saying the halting problem can't be solved by a Turing machine, and I'm saying it can, but only in infinite time. We don't disagree, we're just saying the same thing in different ways.
By contrast.. and I'll refer back to this in a minute.. some problems can't be solved at all, even in infinite time. Random number prediction is an example. It doesn't matter what equipment we use, or how much information we gather, we'll never be able to predict the next value of a true random sequence.
> I won't address your assertion that a solution to the halting problem
> will provide a solution to "the automatic programming problem",
> because you still haven't said what "the automatic programming
> problem" is, and I'm starting to suspect that you don't have a very
> clear idea of it yourself.
Well, I certainly haven't gotten it across so far.. ;)
Let's try this: In your original response, you said that gcc "takes an input specification and writes a machine language program to implement that specification".
Now let me ask this: where does that input specification come from? Could you write a program to generate it? If so, what would you have to feed into that program?
Compilers are translators. You don't get more out of them than you put in.
Theorem provers, OTOH, can start from a set of basic axioms and discover arithmetic and higher mathematics on their own. They can even solve problems that humans haven't.. In 1933, Herbert Robbins came up with a set of axioms that he thought were a basis for Boolean algebra, but nobody could prove it. In 1996, a program called EQP crunched out the proof. Nobody gave EQP an input specification that it translated into a proof; it came up with the proof on its own. We got something out that we didn't put in.
Genetic algorithms also produce solutions to problems we can't specify, but they do it through random perturbation. In effect, they assemble jigsaw puzzles by shaking the box for a while, opening it, gluing any interlocked pieces together, then closing the box and shaking it again.
Even so, we get something out that we didn't put in.
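[Editor's note: the box-shaking analogy can be made concrete. Below is a minimal sketch in Python; strictly it is a hill climber (a one-individual evolutionary loop) rather than a full genetic algorithm, and the 8-bit target and flip rate are arbitrary choices for illustration. Random perturbation plus "gluing" whatever improves the fit eventually assembles the whole puzzle, with no derivation from postulates anywhere.]

```python
import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]   # the "finished puzzle" (arbitrary)

def fitness(candidate):
    # number of interlocked pieces: positions that match the target
    return sum(c == t for c, t in zip(candidate, TARGET))

def shake(candidate, rate=0.25):
    # random perturbation: flip each bit with probability `rate`
    return [bit ^ (random.random() < rate) for bit in candidate]

def evolve(max_shakes=100_000):
    # start from a random arrangement of pieces
    best = [random.randint(0, 1) for _ in TARGET]
    for _ in range(max_shakes):
        if fitness(best) == len(TARGET):
            break                            # puzzle solved
        trial = shake(best)
        if fitness(trial) > fitness(best):   # glue the improvement in place
            best = trial
    return best
```

Note that `fitness` plays the role of the "open the box and look" step: the loop never reasons about why pieces fit, it only keeps arrangements that happen to fit better.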
If we put theorem provers on one side, and genetic algorithms on the other, automatic programming falls somewhere between.
If programming can be reduced to a finite set of generative postulates, automatic programming will collapse to theorem proving. We'll be able to write programs that derive other programs from the core set of postulates, and they'll eventually write code faster and better than humans can.
If programming can't be reduced to a finite set of generative postulates, it means that writing software is like trying to guess random numbers. No algorithmic solution is possible, and genetic algorithms hooked to expert systems are the closest we'll ever get to programs that generate software without requiring an input specification that completely defines the output.
> Are you now saying that code to compute the square root of 2 will fail
> "as often as not"? That isn't true either.
Uh, no.. I'm saying that computers store floats as N-bit binary approximations, and that truncating those representations is notorious for producing microscopic variations. See Question 14.4 from the C FAQ, for instance. Subtracting one root of 2 from another might give you 1e-37, which is awfully close to zero, but still different enough to fail a straight equality test. That goes back to my original point regarding computers being finicky about precision.. as opposed to humans. ;)
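[Editor's note: the remedy the C FAQ discusses is a tolerance-based comparison, which is easy to show. A sketch in Python; the tolerance value here is a judgment call for illustration, not a constant from any spec. It also confirms Dominus's point that some values, like 1, do compare exactly.]

```python
import math

def nearly_equal(a, b, rel_tol=1e-9):
    # compare within a relative tolerance instead of demanding exact bits
    return abs(a - b) <= rel_tol * max(abs(a), abs(b))

x = math.sqrt(2.0)
print(x * x == 2.0)              # False on IEEE-754 doubles
print(nearly_equal(x * x, 2.0))  # True: the residue is ~1 part in 10**16
print(0.5 + 0.5 == 1.0)          # True: halves and small whole numbers are exact
```

Modern libraries ship this idea ready-made; Python's standard `math.isclose` does the same job with both relative and absolute tolerances.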
mike

You seem to be garbling up your math results.
First of all, Dominus referred to Gödel's first
incompleteness theorem, and not the unrelated proof that
the consistency of ZF implies the consistency of ZFC. An
axiom system is a series of postulates. An axiom system
is consistent if it doesn't prove a contradiction. That
theorem states, quite precisely: no consistent finite
first-order axiom system which includes arithmetic can
prove all true statements of arithmetic. Dominus' point
is that to cover what most people think of as algebra you
need to include arithmetic, and so any given finite axiom
system will always leave questions it cannot answer.
(These questions can actually be boiled down to rather
concrete things, like whether a given polynomial with
integer coefficients is ever 0 when you plug in positive
integers for its variables.)
IMO Dominus referred to that one sloppily though. After
all, there is a well-accepted set of axioms for abstract
algebra. But abstract algebras are not entirely what most
people think of as algebra, and the actual study of
abstract algebra concerns structures with somewhat richer
form, which gets us back to square one. There are algebra
questions we want answered which need new axioms.
The result that you mangled is that ZF's consistency
implies ZFC's consistency. What is ZF? ZF is a commonly
accepted set of axioms for set theory which was produced
by Zermelo and Fraenkel. They produced these axioms to
rescue the idea of set theory (which is due to Cantor)
from the paradoxes which had beset it. Note carefully
that there is no question of showing that Cantor's set
theory was consistent with choice; Cantor's set theory
was self-contradictory. (Else Zermelo and Fraenkel would
not have gotten into the axiom game.)
And indeed you are right that a few decades later Cohen
managed to prove the independence of choice from ZF. More
precisely, he proved that ZF plus the assumption that ZF
is consistent does not prove choice. Or, in alternate
form: if ZF is consistent, then ZF plus the assertion
that choice is false is also consistent.
Neat stuff. You didn't state it correctly, and it is
completely irrelevant to anything that Dominus said, but
it is still neat stuff to understand.
Now back to the actual conversation.
Hmmm...you seem to have misstated the Halting problem.
The Halting problem is solvable in infinite time you say?
Well, what do you mean by that? I can run a simulation
of a program. If it halts, it halts, if it doesn't it
doesn't. Is that what you mean? If so then you have a
solution that is completely backwards from what anyone in
theoretical CS or logic means by the Halting problem,
which is useless to boot. What I mean by "anyone" here
would cover, say, the set of people who could upon
request sketch a proof of both the impossibility of
solving the Halting problem and Gödel's first
incompleteness theorem, then proceed to explain how the
two results are connected.
When using technical terms around people who actually
understand those terms, it is advisable to give
them the usual meaning.
And finally about all of the babble about automatic
programming. First of all theorem provers don't ever
produce information that wasn't a logical consequence
of the input. (At least they shouldn't.) While we
may not have known that output, we do not get out
anything truly independent of the input. But we do
get conclusions that we might not have come up with on
our own.
Similarly, look at compilers. While a compiler, by
design, isn't supposed to invent the behaviour of the
overall program, it is allowed and encouraged to find
optimizations that the human didn't think of. The
programmer need not understand loop unrolling; the
compiler will do it for you. If the compiler deals with
a specialized language which is not Turing complete
(allowing it to avoid the Halting problem), it may be
able to do an incredible job of figuring out how to get
done what you wanted done. The classic example of this
is relational databases.
So while neither compilers nor theorem provers should
produce output that is not a logical consequence of the
input, both draw conclusions that humans would not have,
and apply them to the benefit of the human.
(In fact, as much as you talk compilers down and
theorem provers up, good compilers incorporate theorem
provers. Hmmm...)
Now where does automatic programming fit in? Well
first of all I have to say that I don't know very much
about automatic programming. Virtually nothing in
fact. But your track record on things that I do know
something about doesn't incline me to accept your
description of automatic programming.
So I went off to Google. After visiting a few sites
it appears that automatic programming is the art of
having computer programs that write programs from a
specification. (The trick being that the
specification is at least somewhat incomplete.) Now
you seem to be a great fan of the idea that someone
somewhere will reduce this down to a few axioms. I
rather strongly doubt this. This is a problem which
is going to have many different solutions. The trick
isn't in making up the missing bits of the
specification, it is doing it in a way that comes up
with answers the human thinks are reasonable. And
there isn't a simple right answer to that.
So you could have a program which actually solves the
problem as stated perfectly, but whose solutions are as
disliked as Microsoft's wizards. For the same reason:
because it guesses, and guesses wrongly...