From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!cs.utexas.edu!uunet!psinntp!scylla!daryl Thu Dec 26 23:57:01 EST 1991
Article 2264 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!cs.utexas.edu!uunet!psinntp!scylla!daryl
From: daryl@oracorp.com
Subject: Penrose on Man vs. Machine
Message-ID: <1991Dec19.041945.27038@oracorp.com>
Organization: ORA Corporation
Date: Thu, 19 Dec 1991 04:19:45 GMT

Mikhail Zeleny has asked that I reprint some private email I sent
to him so that it can be debated in front of the whole Net. Rather
than posting the letters verbatim, I'll try to summarize my side of
the argument, and Mikhail can do the same for his side. Does that
sound fair, Mikhail?

Penrose is a brilliant man, and since I am a physicist by training, I
am properly awed by his contributions to the field. However, having
had the pleasure of hearing Penrose speak on his book, The Emperor's
New Mind, I have come to the conclusion that he has little to offer
the field of mathematical logic. His speculations about the
relationship between quantum mechanics, gravity, and the mind are fun
reading, but the part of his book that I really think is fundamentally
mistaken is his argument that the human mind must be more than a
computer. Unlike Searle's argument, which is too murky either to be
refuted or to be convincing, Penrose's arguments are models of
clarity, although mistaken.

I will take two examples:

I. "How to outdo an algorithm", on pages 64-66 of my edition, contains
the following paragraph:

     Suppose we have some algorithm which is sometimes effective for
     telling us when a Turing machine will not stop. Turing's procedure,
     as outlined above, will explicitly exhibit a Turing machine
     calculation for which that particular algorithm is not able to 
     decide whether or not the calculation stops. However, in doing so,
     it actually enables us to see the answer in this case! The particular
     Turing machine calculation that we exhibit will indeed not stop.

A little background: The halting problem is the question of deciding,
for arbitrary numbers n and m, whether Turing machine #n will ever
halt when given input m. An algorithm H(n;m) is said to solve the
halting problem if H(n;m) = 1 if n halts on m, and H(n;m) = 0 if n
never halts on m. A well-known proof of Turing's shows that there is
no algorithm that solves the halting problem: every algorithm H(n;m)
must either give the wrong answer on some inputs n and m, or else
fail to give any answer (that is, H(n;m) may never halt). Penrose calls
an algorithm H(n;m) "sometimes effective" for solving the halting problem
if it always gives the right answer whenever it halts (although it may
never halt for some inputs). This is the same as the notion of a
"partially correct" program in computer science, so I will use that
terminology.

Penrose is saying in the paragraph above that for any partially correct
algorithm H(n;m) for solving the halting problem, we can
find numbers n and m such that
1. We can see that Turing machine n never halts on m.
2. H cannot (less anthropomorphically, H never returns an answer).
Therefore, we can beat algorithm H (by answering at least one question
that H cannot). Since we can do this for any H, we can beat any
algorithm.
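The construction behind this is the standard diagonal argument, and it
can be sketched concretely. Everything below -- the generator-based
machine model, the particular bounded tester H, and all the names --
is my own illustration, not Penrose's notation: "machines" are Python
generator functions, each yield counts as one computation step, and
halting means the generator finishes.

```python
BUDGET = 50  # how many steps this particular H is willing to simulate

def run_bounded(machine, arg, fuel):
    """Drive the computation machine(arg) for at most `fuel` steps.
    Returns the machine's result if it halts in time, else None
    (meaning: no verdict -- it may or may not halt)."""
    gen = machine(arg)
    for _ in range(fuel):
        try:
            next(gen)
        except StopIteration as stop:
            return stop.value
    return None

def H(pair):
    """A partially correct halting tester: simulate `machine` on `arg`
    for BUDGET steps. If the simulation halts, answer 1 -- and that
    answer is always right. Otherwise H itself never answers (it spins
    forever), which partial correctness permits."""
    machine, arg = pair
    gen = machine(arg)
    for _ in range(BUDGET):
        yield                    # one step of H's own work
        try:
            next(gen)            # one step of the simulated machine
        except StopIteration:
            return 1             # simulated machine halted: say so
    while True:                  # budget exhausted: no answer, ever
        yield

def G(machine):
    """The diagonal machine: ask H about machine(machine). If H ever
    answers 1 ("halts"), loop forever; if H never answers, G inherits
    H's divergence. Either way H's answer cannot be right about G."""
    verdict = yield from H((machine, machine))
    if verdict == 1:
        while True:
            yield

# Because this H is partially correct, the reasoning in the text tells
# us that G(G) never halts and that H gives no answer on it -- but we
# can only ever observe a finite prefix of that:
print(run_bounded(G, G, fuel=10_000))        # None: no halt observed
print(run_bounded(H, (G, G), fuel=10_000))   # None: H never answers
```

Note that the final two lines only confirm "no verdict within the
fuel bound"; the conclusion that G(G) *never* halts comes from the
argument in the text, and only under the assumption that H is
partially correct -- which is exactly the point at issue.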

It may appear at first that Penrose has shown that no algorithm can do
as well as the human mind at solving the halting problem, and
therefore the human mind cannot be an algorithm. Penrose himself seems
to believe that he has demonstrated this. However, he has shown no
such thing. Penrose's method for finding the numbers n and m relies on
the *assumption* that algorithm H is partially correct.  If we are
given H but are *not* told that it is partially correct, Penrose's
recipe gives us no way to exhibit something that we can do but H
cannot.

The non-algorithmic insight, if there is any, is in knowing which H's
are partially correct and which are not, and Penrose tells us nothing
that would suggest that humans are any better at this than Turing
machines.

The *strongest* conclusion that we can legitimately draw from Penrose's
argument is that we can beat any algorithm that we *know* to be partially
correct; so if we are algorithms, there must be some algorithms whose
partial correctness we do not know.

Does Penrose give any reasons to believe that humans are better than
algorithms in sorting out the partially correct programs from the
incorrect programs? Yes, he does, in a different section of The
Emperor's New Mind. Penrose makes the claim that the insight given by
reflection is somehow nonformal, and gives us abilities beyond
those of mere algorithms: on page 110, he says:

     Reflection principles provide the very antithesis of formalist
     reasoning.

However, the only examples of reflection that Penrose cites are
*perfectly formal*! He says:

     The insight whereby we concluded that the Godel proposition
     P_k(k) is actually a true statement in arithmetic is an example
     of a general type of procedure known to logicians as a reflection
     principle...

(The sentence P_k(k) is the famous Godel sentence for first-order
arithmetic: a sentence that is true but not provable within the
system, assuming the system is sound.)

As in the case with "How to outdo an algorithm" above, Penrose glosses
over a crucial step: on the bottom of page 107 and top of 108, he mentions
in passing

     Our formal system should not be so badly constructed that it actually
     allows false propositions to be proved!

The reasoning leading to the conclusion that P_k(k) is true depends in
an essential way on this assumption that the original system is sound
(doesn't allow false propositions to be proved). The crucial,
non-algorithmic step in the reasoning is in this very assumption (it
is non-algorithmic to decide which formal systems are sound and which
are not). The reflection principle, allowing us to deduce that P_k(k)
is true, given that our original system is sound, is not the
antithesis of formalist reasoning; it is completely formal! It is even
provable in ZFC.
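To make that concreteness explicit, here is my own schematic rendering
of the step (F is the formal system, Prov_F its provability predicate,
'S' the code of a sentence S):

     Sound(F)  ->  ( Prov_F('S') -> S )        [reflection, for each S]

     P_k(k)  <->  not Prov_F('P_k(k)')         [diagonal construction]

     From Sound(F): if Prov_F('P_k(k)'), then P_k(k) is true, hence
     not Prov_F('P_k(k)') -- contradiction.  So not Prov_F('P_k(k)'),
     and therefore P_k(k).

Every line of that derivation is formal; the only unformalized input
is the premise Sound(F).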

Therefore, in these cases, Penrose's arguments amount to the following:

    (1) Assuming that we can tell which Turing machines H are partially
        correct for solving the halting problem, then our reasoning
        is nonalgorithmic.

    (2) Assuming that we can tell which theories are sound, then our
        reasoning is nonalgorithmic.

In other words, assuming that we can do things that no machine can do,
then we can do things that no machine can do.

Daryl McCullough
ORA Corp.
Ithaca, NY.
