in reply to
Re: Re: (OT) Where is programming headed?
in thread (OT) Where is programming headed?
I think I might agree with your main point, but I'd suggest that in the future
you avoid using technical jargon when you're not exactly sure what it means.
For example:
Says mstone:
Programmers will be necessary until someone solves the halting problem. The
automatic programming problem (writing a program that writes programs for you)
has been proven NP-complete.
I'm not sure what you mean here, and most of the obvious interpretations
are false. If you mean that it's impossible to write a program
that writes programs, that's wrong, because I have a
program here that I use every day that takes an input specification and
writes a machine language program to implement that specification;
it's called gcc. Programs write other programs
all the time.
I doubt if there's any meaningful sense in which
"the automatic programming problem" (whatever that is)
can be shown to be NP-complete.
If you mean that it's equivalent to the halting problem,
you might also want to note that the halting problem is not NP-complete, and
revise accordingly.
You said that the C code will fail "as often as not":
float x, y;
x = 1;
y = 1;
if (x == y) {
...
}
But you're mistaken; the result of the test is guaranteed to be true.
Certain float values can be compared exactly,
and 1 is among those values.
In general,
if two floats both happen to represent rational numbers
whose denominators are both powers of 2, and if neither float's
significand exceeds the available precision, then the values
are represented exactly and can be compared exactly.
In particular, integral values less than about 2^53
will be represented and compared exactly.
So while I think you may have good points, the factual errors in
your note spoiled its persuasiveness for me.
Hope this helps.

Mark Dominus
Perl Paraphernalia
Re: Re: (OT) Where is programming headed? by mstone (Deacon) on Dec 15, 2001 at 03:35 UTC 
> > Programmers will be necessary until someone solves the halting
> > problem. The automatic programming problem (writing a program that
> > writes programs for you) has been proven NP-complete.
>
> I'm not sure what you mean here, and most of the obvious
> interpretations are false. If you mean that it's impossible to
> write a program that writes programs, that's wrong, because I have
> a program here that I use every day that takes an input
> specification and writes a machine language program to implement
> that specification; it's called gcc. Programs write other programs
> all the time.
gcc doesn't 'write programs' per se, it just translates programs you've written in
one formal language (C) to another formal language (machine code).
Automatic programming boils down to the deep theoretical
question of whether all programming can be reduced to a finite set of
generative postulates, the way, say, algebra can.
If that were true, programmers would be unnecessary. A few
supercomputers with theorem-proving software could sift their way
through all the permutations, spitting out every possible well-formed
program along the way.
> I doubt if there's any meaningful sense in which "the automatic
> programming problem" (whatever that is) can be shown to be
> NP-complete. If you mean that it's equivalent to the halting problem,
> you might also want to note that the halting problem is not
> NP-complete, and revise accordingly.
I misspoke.. the automatic programming problem is AI-complete,
not NP-complete. The halting problem is, indeed, impossible to solve with a Turing machine, which I believe Von Neumann classified as a 'type-0' computer.
However, a Von Neumann 'type-1' computer (if I've got the numbers
right), can solve the halting problem. A type-1 Von Neumann
machine has an infinite tape full of ones and zeros, where the address
of each cell corresponds to a unique C program, and the value of each cell is a boolean that says whether the program halts or not. In other words, it's a giant lookup table.
Right now, we don't know if that binary sequence can be found by any
means other than brute-force examination of every possible program.
However, if programming can be reduced to a finite set of
generative postulates, it may be possible to calculate
the value of any given bit in finite time.
It might even be possible to calculate that value with a Turing machine.. in which case, we could sidestep the current proof for the halting problem's
unsolvability without actually contradicting it.
Either way, my original assertion still stands. A solution to the halting problem would
generate a solution to automatic programming, and vice versa.
> But you're mistaken; the result of the test is guaranteed to be true.
> Certain float values can be compared exactly, and 1 is among those
> values.
Point taken, though one could probably call that a compiler issue.. IIRC, the ANSI C spec defines all floating-point representations in terms of a minimal error variable elim.
Please insert the code necessary to calculate the square root of 2.
mike
.

use Math::BigInt;
my $N = Math::BigInt->new(shift @ARGV);
while (1) {
    $N += 1;
    next unless prime($N);
    next unless prime($N+2);
    print "$N\n";
    exit;
}
sub prime {
    my $n = shift;
    for (my $i = Math::BigInt->new(2); $i * $i <= $n; $i += 1) {
        return 0 if $n % $i == 0;
    }
    return 1;
}
You can't tell me whether this program will halt on all inputs.
(It searches for the first pair of twin primes at least as large as its
argument; whether such a pair always exists is the twin prime conjecture,
which remains unsolved.)
Not by "brute-force examination" or by any other method.
Why not? Because you don't know how to figure that out.
Neither does anyone else.
I won't address your assertion that a solution to the halting problem
will provide a solution to "the automatic programming problem",
because you still haven't said what "the automatic programming problem"
is, and I'm starting to suspect that you don't have a very clear idea of it yourself.
> It might even be possible to calculate that value with a Turing
> machine.. in which case, we could sidestep the current proof for the halting problem's
> unsolvability without actually contradicting it.
No, you couldn't. The proof completely rules out this
exact possibility.
I don't think you could understand the proof and still believe this.
> Point taken, though one could probably call that a compiler issue.. IIRC, the ANSI C
> spec defines all floating-point representations in terms of a minimal error variable elim.
You're the one that brought it up; if you wanted to leave compiler-dependent
issues out of the discussion, you should have done that.
As for the square root of 2, what of it? Are you now saying
that code to compute the square root of 2 will fail "as often as not"?
That isn't true either.
The term elim does not appear in the 1999 C language
standard, which I consulted before posting my original reply.

Mark Dominus
Perl Paraphernalia

> Perhaps I misunderstand your meaning, but it seems to me that Gödel's
> first incompleteness theorem says exactly that algebra cannot be
> reduced to a finite set of postulates.
Gödel proved that the axiom of choice can't be disproved using the other axioms of Cantor set theory. Cohen followed that up by proving that the axiom of choice is independent of the remaining axioms. In other words, instead of a single concept of set theory, we have two options: one where we assume that the axiom of choice is true, and one where we assume that it's false. Cantor's postulates don't give us enough information to choose one over the other, and that makes them incomplete.
You can't apply Gödel's theorem the opposite way, though. In either case, the system you choose is completely generated from its particular basis.
> If "brute-force examination" were sufficient, one would write a
> computer program to perform the bruteforce examination and come up
> with the answer. But this is precisely what we know is impossible.
We're missing each other slightly on this one.. you're talking about a finite, algorithmic solution, and I'm talking about the infinite, theoretical one. You're saying the halting problem can't be solved by a Turing machine, and I'm saying it can, but only in infinite time. We don't disagree, we're just saying the same thing in different ways.
By contrast.. and I'll refer back to this in a minute.. some problems can't be solved at all, even in infinite time. Random number prediction is an example. It doesn't matter what equipment we use, or how much information we gather, we'll never be able to predict the next value of a true random sequence.
> I won't address your assertion that a solution to the halting problem
> will provide a solution to "the automatic programming problem",
> because you still haven't said what "the automatic programming
> problem" is, and I'm starting to suspect that you don't have a very
> clear idea of it yourself.
Well, I certainly haven't gotten it across so far.. ;)
Let's try this: In your original response, you said that gcc "takes an input specification and writes a machine language program to implement that specification".
Now let me ask this: where does that input specification come from? Could you write a program to generate it? If so, what would you have to feed into that program?
Compilers are translators. You don't get more out of them than you put in.
Theorem provers, OTOH, can start from a set of basic axioms and discover arithmetic and higher mathematics on their own. They can even solve problems that humans haven't.. In 1933, Herbert Robbins came up with a set of axioms that he thought were a basis for Boolean algebra, but nobody could prove it. In 1996, a program called EQP crunched out the proof. Nobody gave EQP an input specification that it translated into a proof, it came up with the proof on its own. We got something out that we didn't put in.
Genetic algorithms also produce solutions to problems we can't specify, but they do it through random perturbation. In effect, they assemble jigsaw puzzles by shaking the box for a while, opening it, gluing any interlocked pieces together, then closing the box and shaking it again.
Even so, we get something out that we didn't put in.
If we put theorem provers on one side, and genetic algorithms on the other, automatic programming falls somewhere between.
If programming can be reduced to a finite set of generative postulates, automatic programming will collapse to theorem proving. We'll be able to write programs that derive other programs from the core set of postulates, and they'll eventually write code faster and better than humans can.
If programming can't be reduced to a finite set of generative postulates, it means that writing software is like trying to guess random numbers. No algorithmic solution is possible, and genetic algorithms hooked to expert systems are the closest we'll ever get to programs that generate software without requiring an input specification that completely defines the output.
> Are you now saying that code to compute the square root of 2 will fail
> "as often as not"? That isn't true either.
Uh, no.. I'm saying that computers represent floats as N-bit binary logarithms, and that logs are notorious for producing microscopic variations when you truncate them. See Question 14.4 from the C FAQ, for instance. Subtracting one root of 2 from another might give you 1e-37, which is awfully close to zero, but still different enough to fail a straight equality test. That goes back to my original point regarding computers being finicky about precision.. as opposed to humans. ;)
mike
.



 Re: Re: (OT) Where is programming headed? by Rudif (Hermit) on Dec 15, 2001 at 14:52 UTC 
Dominus
>> integral values less than about 2^53 will be represented and compared exactly
Since this OT is turning into a C refresher course, factual accuracy should be in order. Your statement is true of C/C++ double, which is also what Perl uses for numeric values. The example float x, y; declares a couple of single-precision floats, which can represent exactly integral values of up to 2^24. A summary of IEEE 754 format can be found here and an interactive converter here.
Rudif

Says rudif:
Your statement is true of C/C++ double, which is also what Perl uses for numeric values.
The example float x, y;
Oops! You are absolutely right. My brain is so stuck on doubles that
I often forget about singleprecision floats.
Anyway, my main point, which was that the number 1
is exactly representable, is still correct.
Thanks for the correction.

Mark Dominus
Perl Paraphernalia

