in reply to OT: Mathematics for programming (again)

Ironically, the proofs that professional mathematicians create are almost never truly rigorous either. The difference is that a mathematician knows how, in principle, the proof could be made rigorous. Assuming, that is, that there are no errors, which is often a dubious assumption.

How do I explain it? Imagine that you wrote down a detailed outline for your computer program in pseudocode, but you never tried running it. Suppose you convinced yourself that the outline was complete and unambiguous. That's kind of like a mathematical proof. Suppose two other programmers were selected to read your spec closely, agreed with your assessment, thought it was interesting, and didn't find any major mistakes. That's kind of like a published mathematical proof.

Based on your experience programming, how many small, niggling problems do you think you'd encounter if you tried to turn that outline into a real program? How likely is it that you'd hit something that would require a larger rethink? Do you think you might encounter a fatal flaw that would make your program useless?

That's how math is. Lots of proofs have errors. It has happened (for instance to an Italian school of geometry in the late 1800s) that entire areas of mathematics collapsed under the weight of accumulated errors. There are some results that nobody knows whether to believe, because nobody has been able to audit the proof. (Famously, the classification of finite simple groups.)

Just waving a mathematical wand and saying we have a proof doesn't mean that we're right. That's not to say that trying to construct proofs about our programs is not a worthwhile exercise. But it isn't a magic cure either.

Edit: planetscape noticed that I said areas instead of errors. I found this very ironic, but fixed it anyways.

Re^2: OT: Mathematics for programming (again)
by TGI (Parson) on Sep 10, 2008 at 22:48 UTC

    Is this an extension of the Curry-Howard isomorphism?

    If so, as a corollary your statement "Just waving a mathematical wand and saying we have a proof doesn't mean that we're right" is equivalent to Dominus' (in)famous statement that "You can't just make shit up and expect the computer to know what you mean".
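
    For what it's worth, here is a sketch of what that isomorphism looks like in a proof assistant (Lean 4; the example is mine, not anything from the thread). An implication is a function type, and the proof term is literally a program:

        -- Under Curry-Howard, proving (P → Q) → P → Q amounts to
        -- writing the function that applies its first argument:
        example (P Q : Prop) : (P → Q) → P → Q :=
          fun f p => f p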


    TGI says moo

      That isomorphism is interesting, but orthogonal to what I was trying to say. I would say that your point is part of the reason why there is such a good parallel between doing mathematics and writing programs. However the point that I was trying to make is that our customary way of doing mathematics is like programming where you only outline the program. You don't write it out completely, you don't run it through a compiler, you don't do QA on it. You just do desk checking on a program that has never actually been written and convince yourself that it would work.

      If I wanted to draw a parallel to a famous quote, I would go with Knuth's: "Beware of bugs in the above code; I have only proved it correct, not tried it."

        I don't know that they are orthogonal; one follows from the other. Here's the logic as I see it:

        1. A proof is a program.
        2. A mentally validated proof is a mentally validated program.
        3. Mental validation is insufficient to verify programs are correct.
        4. Therefore mental validation is insufficient to verify proofs are correct.

        There's all kinds of work in formal methods for creating provably correct programs. It seems like you are calling for the converse: computer-verifiable proofs.

        I'm not sure how you'd do this, but I think you'd need to start with some notational system that encompasses all of mathematics. Where are Russell and Whitehead when you need them? :)

        Automated verification of proofs is such a good idea that, it turns out, people are already working on it.
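
        To make that concrete, here is a hedged sketch of what a machine-checked proof looks like in one such system (Lean 4; the theorem and its name are my own toy example). The point is that the kernel, not a human reviewer, confirms every step:

            -- A machine-checked proof by induction that 0 + n = n.
            -- The checker verifies each step; no desk checking involved.
            theorem zero_add' (n : Nat) : 0 + n = n := by
              induction n with
              | zero => rfl
              | succ n ih => rw [Nat.add_succ, ih]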

        Your argument for automated verification of proofs is persuasive. Thanks for bringing the idea up; it has provided me with some interesting things to think about while procrastinating.


        TGI says moo

Re^2: OT: Mathematics for programming (again)
by vrk (Chaplain) on Sep 12, 2008 at 08:30 UTC

    Another example is the proof of Fermat's last theorem by Andrew Wiles. Since the proof is long and involved, the errors came to light only under close scrutiny, and it took the author more than a year of further work to fix them.

    I take it you are advocating that the only good way to ensure a program is correct is to run it on a computer. But here's an inherent problem: your mental model of the semantics of the programming language you use and what is actually implemented on the computer are bound to differ. Which one is correct is subject to debate (i.e. whether the documentation -- which is the only thing consumable by humans -- or the implementation -- which is consumable by computers -- is definitive). A document may be ambiguous, yet a human reading it can still understand what is meant. A computer program cannot be ambiguous. What may happen is that you interpret the program differently from the computer.
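
    A minimal Perl illustration of that divergence (my own toy example, not anything from the thread): a human reading the source might well expect the comparison to hold, but the floating-point semantics actually implemented disagree.

        use strict;
        use warnings;

        # A human reading this expects "equal"; the implemented
        # floating-point arithmetic says otherwise.
        print 0.1 + 0.2 == 0.3 ? "equal\n" : "not equal\n";    # prints "not equal"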

    A further problem is the level of explicitness required. Machines do not think creatively, so every last little detail has to be spelled out. Humans, on the other hand, can guess, find connections, and draw on existing knowledge when interpreting a document. Excessive detail only hampers a human's understanding, which is the opposite of what a computer program requires. A computer will make all the idiot mistakes you never thought anyone reviewing the program would.

    --
    say "Just Another Perl Hacker";

      I'm not advocating that the only good way to ensure a program is correct is to run it on a computer. However, I am saying that actually running it on a computer tends to catch a lot of things you don't catch otherwise. Furthermore, when you add in unit tests, QA, and other best practices that can only happen after the code can be run, you catch even more. Therefore a program that has only been desk checked should be viewed with suspicion.
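
      As a sketch of what those practices buy you once the code can actually run, here is a minimal unit test in the usual Perl idiom (Test::More; the gcd routine is a made-up example). Even a trivial test file exercises the code in ways that desk checking never does.

          use strict;
          use warnings;
          use Test::More tests => 3;

          # A hypothetical routine under test.
          sub gcd { my ($m, $n) = @_; ($m, $n) = ($n, $m % $n) while $n; return $m }

          is( gcd(12, 8), 4, 'gcd of 12 and 8'   );
          is( gcd(7, 13), 1, 'coprime arguments' );
          is( gcd(0, 5),  5, 'zero argument'     );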

      As for why mathematicians don't do this, you've hit on some of the reasons. Mathematicians only seek to convince other mathematicians of their results. They strive to be critical of themselves and others, but at the end of the day if you can convince other mathematicians that you have a proof, that's what matters. Therefore there is no need or desire to write proofs out in unreadable formal detail, but there is a need to clarify the central point in a way that other humans will understand.

      The only good way to ensure a program is correct is to run it on a computer.

      and vice versa (s/program/proof/g):

      The only good way to ensure a proof is correct is to run it on a computer.

      If not, what was the reason for all of the Turing/Gödel/Church efforts to settle David Hilbert's question about the completeness and consistency of mathematical systems (Hilbert's program)?

      That's one of the meanings of Knuth's famous quote:

      "Beware of bugs in the above code; I have only proved it correct, not tried"

      A constructive proof that uses only finite procedures (algorithms) and runs on a Turing-equivalent machine has more value than an ordinary mathematical demonstration.
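
      As a hedged illustration of that last point (the code and names below are mine): Euclid's proof that there are infinitely many primes is constructive in exactly this sense, and it can literally be run. Given any finite list of primes, it constructs a number whose smallest nontrivial factor is a prime missing from the list.

          use strict;
          use warnings;

          # Euclid's construction: multiply the given primes and add 1;
          # the smallest factor > 1 of the result is a prime that does
          # not appear in the original list.
          sub new_prime {
              my $n = 1;
              $n *= $_ for @_;
              $n += 1;
              my $d = 2;
              $d++ while $n % $d;
              return $d;
          }

          print new_prime(2, 3, 5, 7), "\n";    # 2*3*5*7 + 1 = 211, itself prime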