PerlMonks

Re: Re: Re: (OT) Where is programming headed?

by tilly (Archbishop)
on Dec 16, 2001 at 17:57 UTC ( #132341=note )


in reply to Re: Re: (OT) Where is programming headed?
in thread (OT) Where is programming headed?

You seem to be garbling up your math results.

First of all, Dominus referred to Gödel's first incompleteness theorem, and not the unrelated proof that the consistency of ZF implies the consistency of ZFC. An axiom system is a series of postulates. An axiom system is consistent if it doesn't prove a contradiction. That theorem states, quite precisely, No consistent finite first order axiom system which includes arithmetic can prove its own consistency. Dominus' point is that to cover what most people think of as algebra you need to include arithmetic, and so any given finite axiom system will always result in questions it cannot answer. (These questions can actually be boiled down to rather concrete things, like whether a given polynomial is ever 0 when you plug in positive integers for its variables.)

IMO Dominus referred to that one sloppily though. After all, there is a well-accepted set of axioms for an abstract algebra. But abstract algebras are not entirely what most people think of as algebra, and the actual study of abstract algebra concerns structures with somewhat richer form, which gets us back to square one. There are algebra questions we want answered which need new axioms.

The result that you mangled is that ZF's consistency implies ZFC's consistency. What is ZF? ZF is a commonly accepted set of axioms for set theory which was produced by Zermelo and Fraenkel. They produced these axioms to rescue the idea of set theory (which is due to Cantor) from the paradoxes which had beset it. Note carefully that there is no question of showing that Cantor's set theory was consistent with choice; Cantor's set theory was self-contradictory. (Else Zermelo and Fraenkel would not have gotten into the axiom game.)

And indeed you are right that a few decades later Cohen managed to prove the independence of choice from ZF. More precisely, he proved that ZF plus the assertion that ZF is consistent does not prove choice. Or, in alternate form: if ZF is consistent, then ZF plus the assertion that choice is false is also consistent.

Neat stuff. You didn't state it correctly, and it is completely irrelevant to anything that Dominus said, but it is still neat stuff to understand.

Now back to the actual conversation.

Hmmm...you seem to have misstated the Halting problem. The Halting problem is solvable in infinite time, you say? Well, what do you mean by that? I can run a simulation of a program. If it halts, it halts; if it doesn't, it doesn't. Is that what you mean? If so then you have a solution that is completely backwards from what anyone in theoretical CS or logic means by the Halting problem, and which is useless to boot. What I mean by "anyone" here would cover, say, the set of people who could upon request sketch a proof of both the impossibility of solving the Halting problem and Gödel's first incompleteness theorem, then proceed to explain how the two results are connected.
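To make the asymmetry concrete: running a simulation can only ever confirm that a program halts, never that it doesn't. A minimal sketch, with toy programs modeled as Python generators that yield once per step (all names here are invented for illustration):

```python
def halts_within(program, max_steps):
    """Return True if `program` halts within max_steps steps,
    None ("don't know") otherwise -- it can never return a definite "no"."""
    it = program()
    for _ in range(max_steps):
        try:
            next(it)                 # advance the simulation one step
        except StopIteration:
            return True              # observed halting: a definite "yes"
    return None                      # budget exhausted: no information

def halting_program():
    for i in range(10):
        yield                        # does 10 steps of work, then halts

def looping_program():
    while True:
        yield                        # loops forever

print(halts_within(halting_program, 100))   # True
print(halts_within(looping_program, 100))   # None
```

No matter how large you make `max_steps`, a `None` answer tells you nothing; a real decision procedure would have to return a definite yes or no for every program, which is exactly what's impossible.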

When using technical terms around people who actually understand those terms, it is advisable to give them the usual meaning.

And finally about all of the babble about automatic programming. First of all theorem provers don't ever produce information that wasn't a logical consequence of the input. (At least they shouldn't.) While we may not have known that output, we do not get out anything truly independent of the input. But we do get conclusions that we might not have come up with on our own.

Similarly, look at compilers. While a compiler, by design, isn't supposed to invent the behaviour of the overall program, it is allowed and encouraged to find optimizations that the human didn't think of. The programmer need not understand loop unrolling. The compiler will do it for you. If compilers can deal with specialized languages which are not Turing complete (allowing them to avoid the Halting problem), they may be able to do an incredible job of figuring out how to get done what you wanted done. The classic example of this is relational databases.
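Loop unrolling itself is easy to illustrate. A hand-unrolled sketch (written in Python for readability; a real compiler applies this transformation to the generated machine code, not the source):

```python
def dot(a, b):
    """Straightforward dot product: one multiply-add per loop iteration."""
    s = 0
    for i in range(len(a)):
        s += a[i] * b[i]
    return s

def dot_unrolled(a, b):
    """Same result, with the body unrolled four times, so the
    loop-control overhead is paid once per four elements."""
    s = 0
    n = len(a)
    i = 0
    while i + 4 <= n:
        s += a[i]*b[i] + a[i+1]*b[i+1] + a[i+2]*b[i+2] + a[i+3]*b[i+3]
        i += 4
    while i < n:                     # leftover elements when n % 4 != 0
        s += a[i] * b[i]
        i += 1
    return s

xs, ys = list(range(10)), list(range(10, 20))
assert dot(xs, ys) == dot_unrolled(xs, ys)
```

The point is that the transformed code is a logical consequence of the original, the compiler just found a form of it the programmer never wrote down.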

So while neither compilers nor theorem provers should produce output that is not a logical consequence of the input, both draw conclusions that humans would not have and apply them to the benefit of the human. (In fact, as much as you talk compilers down and theorem provers up, good compilers incorporate theorem provers. Hmmm...)

Now where does automatic programming fit in? Well first of all I have to say that I don't know very much about automatic programming. Virtually nothing in fact. But your track record on things that I do know something about doesn't incline me to accept your description of automatic programming.

So I went off to Google. After visiting a few sites it appears that automatic programming is the art of having computer programs that write programs from a specification. (The trick being that the specification is at least somewhat incomplete.) Now you seem to be a great fan of the idea that someone somewhere will reduce this down to a few axioms. I rather strongly doubt this. This is a problem which is going to have many different solutions. The trick isn't in making up the missing bits of the specification; it is in doing it in a way that comes up with answers the human thinks are reasonable. And there isn't a simple right answer to that.

So you could have a program which actually solves the problem as stated perfectly - but whose solutions are as disliked as Microsoft's wizards. For the same reason. Because it guesses, and guesses wrongly...


Gödel's incompleteness theorems
by robin (Chaplain) on Dec 16, 2001 at 19:34 UTC
    Good post tilly. (++ btw). I hope I won't come across as too pedantic if I offer a small correction.

    You stated the first incompleteness theorem as:

    No consistent finite first order axiom system which includes arithmetic can prove its own consistency.
    Now, that's actually a statement of the second incompleteness theorem. Also, the word "finite" is out of place, and severely weakens the claim. Even PA (Peano Arithmetic) can't be finitely axiomatised in first-order logic, because induction has to be defined using an infinite scheme. I expect you meant "recursive" instead of "finite". (In other words, there has to be a mechanical procedure which can decide whether or not any given sentence is an axiom, but the total number of axioms might be infinite.)

    The first incompleteness theorem (as strengthened by Rosser) states:

    In any consistent recursive first-order theory which includes arithmetic, there is a sentence which can be neither proved nor disproved in the theory. I.e. the theory is incomplete.

    (Gödel's original version only applies to theories which have a somewhat artificial property which he called omega-consistency.)

    If anyone is seriously interested in logic and the theory of computation, I would strongly recommend Boolos & Jeffrey which is remarkably well-written and complete. (Substantial notes are online here.) But be warned: even though the authors have done a fantastic job of making it as easy as possible to understand, it isn't light reading.

    Update: (thanks sevensven) I should add that Douglas Hofstadter's Gödel, Escher, Bach is an inspiring, idiosyncratic book which (among other things) explains Gödel's proof of the first incompleteness theorem. At the very least it might inspire you to learn more about the subject. (It did for me!)

Re: Re: Re: Re-spect!: (OT) Where is programming headed?
by mstone (Deacon) on Dec 17, 2001 at 03:14 UTC

    couldn't resist.. ;-)


    > An axiom system is a series of postulates. An axiom system is
    > consistent if it doesn't prove a contradiction. That theorem states,
    > quite precisely, No consistent finite first order axiom system which
    > includes arithmetic can prove its own consistency.

    I think we're coming at the problem from opposite directions, so you see an elephant facing left, and I see an elephant facing right.

    I agree that no finite set of axioms including arithmetic can generate a system that's simultaneously correct (every statement the basis generates is guaranteed to be true) and complete (the basis is guaranteed to generate every true statement). AFAIK, that doesn't negate either the existence or utility of systems that do generate from finite bases, though.

    I never stated a requirement of simultaneous completeness and correctness. I made a quick reference to evoke the concept of formal systems, and in retrospect, I guess I should have said 'Post systems'.


    > When using technical terms around people who actually understand
    > those terms, it is advisable to give them the usual meaning.

    Okay, let's review some vocabulary:

    An algorithm is a procedure that solves a problem. The solution it produces must be correct, and the procedure must halt in a finite amount of time.

    If an algorithm can solve a problem in finite time, the problem is decidable. If no algorithm can solve the problem in finite time, the problem is undecidable.

    A solvable problem is one that can be solved by a Turing machine. A problem that cannot be solved by a Turing machine is unsolvable. Every decidable problem is solvable. Every unsolvable problem is undecidable.

    Please note that solvability carries no requirement of finite solution time, the way decidability does. We often use the terms interchangeably, but there are times when it's useful to make the distinction between problems that can be solved in infinite time, and problems that can't.

    If we say that the halting problem is solvable, it means every program that should halt will, and every program that shouldn't halt won't. That just restates our basic definition of halting, so it must be true. If we say that the halting problem is unsolvable, we admit the possibility of programs that should halt but don't, and programs that shouldn't halt but do. That directly contradicts our definition of halting, so it must be false. If you reject solutions that take infinite time, you break the definition of halting.


    > First of all theorem provers don't ever produce information that
    > wasn't a logical consequence of the input. (At least they shouldn't.)
    > While we may not have known that output, we do not get out anything
    > truly independent of the input.

    No arguments there..


    > But we do get conclusions that we might not have come up with on our
    > own.

    Bingo! That's what I'm talking about. The interesting question is how much distance we can put between the specification and the product.

    There's a qualitative difference between a program that can extrapolate code from a minimal basis, and a Microsoft wizard. The first explores an infinite generative system, and the second walks through a finite data structure that somebody else filled and ordered. The first can expand its code base on its own, while the second will always be handcuffed to whoever fills its lookup table. The first will produce code that's correct within the limits of its basis and generative capacity, while the second is subject to all the size-vs-implicit-bugs problems that plague any large body of code.
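    The contrast can be sketched in a few lines (everything here is invented for illustration; real wizards and code generators are of course far larger):

```python
# The "wizard" can only emit snippets someone typed into its table in
# advance; the "generator" composes code for any field from a small
# rule basis.

WIZARD_SNIPPETS = {
    ("getter", "name"): "def get_name(self): return self._name",
}

def wizard(kind, field):
    # Handcuffed to whoever filled the lookup table: anything
    # not hand-entered raises KeyError.
    return WIZARD_SNIPPETS[(kind, field)]

def generate(kind, field):
    # Generative: derives an accessor for *any* field name from rules.
    if kind == "getter":
        return "def get_%s(self): return self._%s" % (field, field)
    if kind == "setter":
        return "def set_%s(self, v): self._%s = v" % (field, field)
    raise ValueError(kind)

print(generate("setter", "age"))   # works, though no one typed it in
# wizard("setter", "age") would raise KeyError: not in the table
```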


    > Now you seem to be a great fan of the idea that someone somewhere
    > will reduce this down to a few axioms. I rather strongly doubt this.

    So do I. I just think it's a significant and interesting question.. as opposed to the brain-starved dribblings of a poser who should know better than to try and cut heads with his betters. ;-)


    > This is a problem which is going to have many different solutions.
    > The trick isn't in making up the missing bits of the specification,
    > it is doing it in a way that comes up with answers the human thinks
    > are reasonable. And there isn't a simple right answer to that.

    And that's why I think we'll always need programmers, no matter how much of the specification can be generated by a program. ;-)

    mike
    .

      I find your distinction between solvability and decidability somewhat interesting. I have not heard the distinction drawn that way. It has been a few years since I last looked at a book on this subject (and I gave away most of my math books about 4 years ago, so I don't even have one to look this up in right now), but I would think I would have remembered something like that.

      So I did a quick Google search. And the funny thing is that I found lots of presentations online that disagreed with your proposed usage. I found none that agreed. For a random instance this lecture states directly that the Halting problem is not solvable.

      And even when I look at your links, they say nothing about finite versus infinite. The classification they draw goes like this. A decision problem is a problem whose answers are all yes/no. A decidable problem is a decision problem that is solvable. Solvable problems differ from decision problems in that solvable problems are not just limited to yes/no answers.

      The fact that infinite answers are not accepted either for definitions of decidability or solvability is underscored by this lecture. They are very explicit about the fact that solvability and decidability both require that all of the computations involved halt. In other words you need to get an answer in due time.

      Now you wouldn't just be making up your finite/infinite distinction, would you? Do you have a verifiable source you are willing to quote showing that someone else out there agrees with the usage you are claiming?

        I think we're getting caught on the difference between defining a language, and proving that a given machine accepts a language by inspection. The former is axiomatic, and the latter is equivalent to the halting problem.

        More fundamentals:

        A Turing machine halts if it reaches either the accept or reject state for the given input.

        If a Turing machine reaches the accept state for a given input, we say that the machine has recognized or proved that input. If a machine rejects an input, we say that the machine has disproved that input.

        A total Turing machine halts for all possible input. Intuitively, a total Turing machine breaks the universe into two complementary sets: 'yes' and 'no'.

        A non-total Turing machine doesn't reach the accept or reject state for some input. It breaks the universe into three sets: 'yes', 'no', and 'damned if I know'.

        A language is the set of all strings that a Turing machine recognizes.

        A language is recursive if the machine that accepts it is total. Recursive languages are provable because the machine can prove or disprove the membership of any possible input.

        A language is recursively enumerable if the machine that accepts it is not total, i.e.: does not reach either the accept or reject state for every possible input.

        Now, it's axiomatic that no Turing machine can read an infinite amount of input in a finite amount of time. If we declare all solutions that take infinite time to be indeterminate, no language that contains infinitely-long terms can be recursive. It can only be recursively enumerable.

        Thing is, if we could reject the infinite terms, it would solve the halting problem. That doesn't work, so we can't use finite solution time as the criterion to decide whether a language is provable or not. It sounds like you've accidentally conflated the idea of defining a language with the idea of proving a language by inspection. Officially incorrect, but easy to do if you're moving fast and not being totally anal about checking the math.

        Now as to the halting problem:

        If a set is recursively enumerable, it is semi-decidable. It will reach the accept state for every provable input, but will either reject or loop infinitely for unprovable input.

        By that definition, we can build a lookup table that says whether any given program halts or not. We just can't do it in finite time. That same statement appears in the lecture you cited:

        It should be noted that an unsolvable problem might be partially solvable by an algorithm that makes an exhaustive search for a solution. In such a case the solution is eventually found whenever it is defined, but the search might continue forever whenever the solution is undefined.

        though I do agree that 'partially solvable' is a superior replacement for my phrase 'solvable, but only by inspection in infinite time'. It's shorter, and fits the hierarchy better.
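        That "partially solvable by exhaustive search" behaviour is easy to sketch. Taking, purely for illustration, the question raised earlier in the thread of whether a polynomial hits 0 at positive integers:

```python
from itertools import count

def find_positive_zero(p):
    """Semi-decide "does p(x, y) == 0 for some positive integers x, y?".
    Halts with a witness whenever one exists; when none exists the search
    continues forever -- it can never rule a solution out."""
    for total in count(2):               # enumerate pairs by x + y
        for x in range(1, total):
            y = total - x
            if p(x, y) == 0:
                return (x, y)

print(find_positive_zero(lambda x, y: x*x - 2*y))  # (2, 2): 2*2 - 2*2 == 0
# find_positive_zero(lambda x, y: x*x + y) would loop forever: no
# positive-integer zero exists, and the search never finds that out.
```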

        mike
        .
