A few days ago I finally got around to starting a book I’ve had for a while now: Scott Aaronson’s book of lectures on the foundations of mathematics, computational complexity, cryptography and quantum mechanics, Quantum Computing Since Democritus. I started with Chapter 3, “Gödel, Turing and Friends”. One of the first things Aaronson does in this chapter (or lecture, since it’s basically a lecture) is prove Gödel’s Incompleteness Theorems via a reproduction of Turing’s proof of the unsolvability of the halting problem. It’s an extremely quick proof which I didn’t know about, and it’s cool precisely because of how quick it is, so I’m going to reproduce it below for fun.
First,
the proof that there's no program to solve the halting problem. This is actually
a very simple reductio proof (for which Aaronson can’t take any credit). Suppose there is such a halting-problem-solving program $P$: a program which can determine, for any arbitrary program together with an input to that program, whether that program will halt. Then we can modify $P$ to produce a new program $P^{\prime}$ that does the following. Given another program $Q$ as input, $P^{\prime}$
(i) runs forever if $Q$ halts, or
(ii) halts if $Q$ runs forever.
(How do we know we can modify $P$ to do this? Simply because making a program run forever, or making it halt, is extremely easy – on a Turing machine just as when programming a PC.)
$P^{\prime}$
must be able to take any arbitrary program as input, so let $Q$ be $P^{\prime}$.
By the conditions above, $P^{\prime}$ will run forever if it halts, and halt if it runs forever. So $P^{\prime}$ cannot exist, and so neither can $P$, by implication (specifically, $P$ cannot exist by modus tollens, because $\exists P \rightarrow \exists P^{\prime}$ and $P^{\prime}$ does not exist).
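To make the construction concrete, here is a minimal Python sketch (my own illustration, not Aaronson’s). `halts` stands for the impossible halting-solver $P$ – its body is necessarily a stub – and `p_prime` is $P^{\prime}$, using the convention that “$Q$ halts” means “$Q$ halts when fed its own source as input”.

```python
def halts(program_source: str, program_input: str) -> bool:
    """Hypothetical halting oracle P: True iff the program given by
    program_source halts on program_input. No such function can
    exist, so this body is a placeholder."""
    raise NotImplementedError

def p_prime(q_source: str) -> None:
    """The modified program P': runs forever if Q halts, halts if Q
    runs forever (with Q run on its own source as input)."""
    if halts(q_source, q_source):
        while True:   # Q halts, so loop forever
            pass
    else:
        return        # Q runs forever, so halt

# The contradiction: let Q be P' itself. Then p_prime, fed its own
# source, halts if and only if it runs forever.
```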
So
how does this relate to Gödel? First, we gotta remember what the First Incompleteness Theorem states. It says that, given a consistent, computable proof system, there’s a true statement about the integers that can never be proved from that proof system. (What is a proof system, you ask? Axioms plus basic rules of inference.) Let us use the reductio
strategy once more and suppose that this were false – that is, that there existed
a consistent, computable proof system $F$ from which any statement about
integers could either be proved or disproved. Then, given a computer program:
“We could simply search through every possible
proof in $F$ until we found either a proof that the program halts or a proof
that it doesn’t halt. This is possible because the statement that a particular
computer program halts is ultimately just a statement about integers. But this
would give us an algorithm to solve the halting problem, which we already know
is impossible. Therefore, F can’t exist.” [22]
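In code, the search Aaronson describes might look like the sketch below. Note that `enumerate_proofs` and `is_proof_of` are hypothetical stand-ins for machinery the supposed proof system $F$ would provide: an enumeration of every candidate proof (say, every finite string, in order of length) and a mechanical check of whether a candidate validly proves a given statement.

```python
from itertools import count

def enumerate_proofs():
    """Yield every candidate proof in F, e.g. every finite string
    over F's alphabet in order of length. (Illustrative stub.)"""
    for n in count():
        yield f"<candidate proof #{n}>"

def is_proof_of(candidate, statement) -> bool:
    """Check mechanically whether candidate is a valid F-proof of
    statement; possible because F is computable. (Illustrative stub.)"""
    raise NotImplementedError

def decides_halting(q_source: str) -> bool:
    """If F proved or disproved every statement about the integers,
    this loop would terminate on every input and so decide the
    halting problem -- which we already know is impossible."""
    for proof in enumerate_proofs():
        if is_proof_of(proof, f"program {q_source} halts"):
            return True
        if is_proof_of(proof, f"program {q_source} runs forever"):
            return False
```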
(The only element of rigour lacking from this proof is the unjustified claim that “the statement that a particular computer program halts is ultimately just a statement about integers”. Basically, what Aaronson is referring to is the fact that all computer programs – whether they involve sorting data structures, computing the shortest path in a DAG, factorising large numbers, configuring player models for a first-person computer game, or whatever – are, ‘at bottom’, i.e. at the most primitive, machine-proximate level of encoding, just programs to solve basic maths problems with integers. This becomes clear in the part of your computer science degree when you analyse a C program in terms of assembly language, wherein you learn that all the higher-level syntax can be deconstructed into a much simpler language which can only tell the computer to store numbers here, retrieve numbers from there, add or subtract two numbers, and jump to this or that bit of code. Assembly language is “flipping bits”, and “flipping bits” means altering binary sequences, i.e. essentially just changing the values of integers.)
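To make that picture concrete, here is a toy sketch (my own illustration) of a machine whose only abilities are the ones just listed – store a number, add or subtract two numbers, jump to another bit of code – computing an ordinary multiplication with nothing else.

```python
def run(program, registers):
    """Execute a toy machine that can only store numbers, add or
    subtract registers, and jump to another instruction."""
    pc = 0  # program counter: which instruction we're on
    while pc < len(program):
        op, *args = program[pc]
        if op == "store":                 # registers[a] = constant n
            registers[args[0]] = args[1]
        elif op == "add":                 # registers[a] += registers[b]
            registers[args[0]] += registers[args[1]]
        elif op == "sub":                 # registers[a] -= registers[b]
            registers[args[0]] -= registers[args[1]]
        elif op == "jump_if_nonzero":     # jump to instruction i
            if registers[args[0]] != 0:
                pc = args[1]
                continue
        pc += 1
    return registers

# 5 * 3, computed purely by storing, adding, subtracting and jumping:
prog = [
    ("store", "r0", 0), ("store", "r1", 5),   # accumulator, addend
    ("store", "r2", 3), ("store", "r3", 1),   # counter, constant 1
    ("add", "r0", "r1"),                      # r0 += 5
    ("sub", "r2", "r3"),                      # r2 -= 1
    ("jump_if_nonzero", "r2", 4),             # loop while r2 != 0
]
print(run(prog, {})["r0"])  # 15
```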
So all high-level programming is an elaborate re-description of – i.e. abstraction from – basic arithmetic problems (and the basic arithmetic problems are, in turn, a representation of the voltage patterns in the transistors). This, of course, makes it easier to write code for things that have nothing to do with basic arithmetic problems. Meanwhile, basic arithmetic problems are, so far as I know, all anyone bothers trying to get Turing Machines to do, because anything more gets super fucked up. (Of course, the Church-Turing thesis implies that anything you can code in a high-level programming language could be redescribed in such a way that you could encode the exact same set of computations on a Turing Machine.) The lambda calculus, which I learnt a few weeks ago, is perhaps the most beautiful abstract ‘model of computation’, because it is an algebraic mathematical system that lets one encode quite high-level programs fairly simply, at least once you have encodings for some basic functions (like the SUCC function or the FIRST function) plus an encoding for a list – see the sketch below. Thinking up Turing Machine designs for even simple problems seems to be way harder.
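Here is what I mean, as a quick Python sketch (my own illustration, with Python functions standing in for lambda terms) of the standard Church encodings: numerals, SUCC, and the PAIR/FIRST projections.

```python
# Church numeral n = the function that applies f to x exactly n times.
ZERO = lambda f: lambda x: x
SUCC = lambda n: lambda f: lambda x: f(n(f)(x))  # one more application of f

# Pairs: PAIR packs two values; FIRST and SECOND project them back out.
PAIR   = lambda a: lambda b: lambda select: select(a)(b)
FIRST  = lambda p: p(lambda a: lambda b: a)
SECOND = lambda p: p(lambda a: lambda b: b)

def to_int(n):
    """Decode a Church numeral into a Python int, for inspection."""
    return n(lambda k: k + 1)(0)

THREE = SUCC(SUCC(SUCC(ZERO)))
print(to_int(THREE))                     # 3
print(to_int(FIRST(PAIR(THREE)(ZERO))))  # 3
```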
Anyway,
so that’s what that remark is about. The proof might still seem unrigorous because of this remark, but whatever.
Now
onto the Second Incompleteness Theorem, which requires another detour via the
halting problem.
Now
we imagine another program $P$ which, given as input another program $Q$, tries to decide whether $Q$ halts by searching through every possible proof or disproof, in some formal system $F$, that $Q$ halts (i.e. using the above
strategy). Then, as before, suppose we modify $P$ to produce a new program
$P^{\prime}$ that
(i) runs forever if $Q$, given its own code as input, is proved to halt, or
(ii) halts if $Q$, given its own code as input, is proved to run forever.
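Reusing the illustrative proof-search stubs from before (`enumerate_proofs` and `is_proof_of`, both hypothetical), this new $P^{\prime}$ might be sketched as follows.

```python
def p_prime(q_source: str) -> None:
    """Search F for a proof or disproof that Q halts on its own code,
    then do the opposite of whatever is proved first."""
    for proof in enumerate_proofs():
        if is_proof_of(proof, f"{q_source} halts on input {q_source}"):
            while True:   # proved to halt, so run forever
                pass
        if is_proof_of(proof, f"{q_source} runs forever on input {q_source}"):
            return        # proved to run forever, so halt
```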
“Now suppose we feed $P^{\prime}$ its own code as
input. Then we know that $P^{\prime}$ will run forever, without ever
discovering a proof or disproof that it halts. For if $P^{\prime}$ finds a
proof that it halts, then it will run forever, and if it finds a proof that it
runs forever, then it will halt, which is a contradiction.” [22]
But
this raises a problem, as Scott notes: “Why isn’t the above argument itself a proof that $P^{\prime}$ will
run forever given its own code as input? And why won’t $P^{\prime}$ discover
this proof that it runs forever – and therefore halt, and therefore run
forever, and therefore halt, etc.?” Fortunately, he also provides a solution, namely that in “proving” that $P^{\prime}$ runs forever, we made use of the hidden assumption that the proof system $F$ is consistent. (If $F$ were inconsistent, it would prove everything, so $P^{\prime}$ would eventually find a proof one way or the other and act on it – and since an inconsistent $F$ proves false statements, no contradiction would follow.)
And
yet this raises a further puzzle, which leads us towards the Second
Incompleteness Theorem. The above has the consequence that, “if $F$ could prove that $F$ was consistent, then $F$
could also prove that $P^{\prime}$ ran forever – thereby bringing back the
above contradiction. The only possible conclusion is that if $F$ is consistent, then $F$ can’t prove its own consistency. This
result is sometimes called Gödel’s Second Incompleteness Theorem.” [23]
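In symbols, writing $\mathrm{Con}(F)$ for the arithmetical sentence asserting $F$’s consistency, the theorem reads: if $F$ is consistent, then $F \nvdash \mathrm{Con}(F)$ – that is, $F$ proves its own consistency only if it is in fact inconsistent.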
Aaronson opines that the Second Incompleteness Theorem establishes “what we maybe should have expected all along”.
One way of putting the result, which he employs, is that if we want to prove
that a theory $F$ is consistent, then “we can only do it within a more powerful theory” [23]. There is
indeed something perfectly intuitive about this, to me at least.
Anyway,
we’re done now. And so now you know how to prove the Incompleteness Theorems. Wasn’t
that fun?