Newsgroups: comp.ai.philosophy,sci.logic
Path: cantaloupe.srv.cs.cmu.edu!rochester!udel!gatech!howland.reston.ans.net!EU.net!sun4nl!cwi.nl!olaf
From: olaf@cwi.nl (Olaf Weber)
Subject: Re: Penrose and human mathematical capabilities
In-Reply-To: constab@unixg.ubc.ca's message of 13 Jul 1995 18:18:06 GMT
Message-ID: <DBp80E.IL1@cwi.nl>
Followup-To: comp.ai.philosophy,sci.logic
Sender: news@cwi.nl (The Daily Dross)
Nntp-Posting-Host: havik.cwi.nl
Organization: CWI, Amsterdam
References: <3t6tcv$nca@netnews.upenn.edu> <3tkpqr$88l@bell.maths.tcd.ie>
	<DBnFr8.CMv@cwi.nl> <3u3o0u$d7a@nnrp.ucs.ubc.ca>
Date: Fri, 14 Jul 1995 09:07:24 GMT
Lines: 89
Xref: glinda.oz.cs.cmu.edu comp.ai.philosophy:30154 sci.logic:12373

In article <3u3o0u$d7a@nnrp.ucs.ubc.ca>, constab@unixg.ubc.ca (Adam Constabaris) writes:

> Of course, there was some controversy about what it *was* Penrose
> was trying to prove in TENM.  Glad to see he got "less fuzzy" in the
> sequel.

I'll try to give a better idea of what Penrose thinks he's proven, but
ultimately must recommend that you check SotM for yourself.

On page 76, Penrose arrives at the conclusion

(G)     Human mathematicians are not using a knowably sound algorithm
        in order to ascertain mathematical truth.

but he admits on page 98 that

(G*)    No individual mathematician ascertains mathematical truth
        solely by means of an algorithm that he or she knows to be
        sound.

might be a somewhat fairer formulation.

> In particular, (G) can't be used to prove anything interesting about
> "strong AI", the thesis that every human mind is functionally
> isomorphic to (or can be described as) some Turing machine or other.

The crucial argument against strong AI is contained in chapter 3.  On
page 130-1 Penrose writes:

                We must distinguish clearly between three distinct
        standpoints with regard to the knowability of a putative
        algorithmic procedure F underlying mathematical understanding,
        whether sound or not.  For F might be
(I)             consciously knowable, where its role as the actual
        algorithm underlying mathematical understanding is also
        knowable,
(II)            consciously knowable, but its role as the actual
        algorithm underlying mathematical understanding is unconscious
        and not knowable,
(III)           unconscious and not knowable.

He then argues that (I) is impossible, (II) is implausible, and spends
a lot of time showing that (III) reduces to (I).

In a sense, Penrose's admission that the best he can do against (II)
is to argue that it is implausible grants a moral victory to the
opposition, at least as far as that possibility is concerned.

I'm not sure that Penrose's argument that (III) reduces to (I) holds
water.  Since the force of this ultimately depends on (I) itself being
impossible, it might not matter.

How does Penrose show that (I) is impossible?  The argument is given
on page 131, and I'll paraphrase it.  I strongly recommend reading it
yourself, and would appreciate it if any errors in my representation
were pointed out.

Let's call the algorithm A, and the related (by some mapping) formal
system F.  Penrose proceeds as follows:

(1)     Ex hypothesi, we know that A underlies our mathematical
        understanding.
(2)     Therefore, we must (mistakenly) believe that F is sound.
(3)     Since we believe the system is sound, we can construct its
        Gödel sentence G(F).
(4)     We can see the truth of G(F).
(5)     Therefore, F is incomplete or unsound.
(6)     But how can we see the truth of G(F) if F is a model of our
        mathematical understanding?
(7)     Therefore we must believe that F is unsound.
.:(8)   Step (7) contradicts (2), therefore (1) cannot hold.

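For reference, here is the standard diagonal construction behind
steps (3)-(5), in textbook notation rather than Penrose's own (this
is my gloss, not a quotation from SotM):

```latex
% Assume F is a consistent, recursively axiomatized theory extending
% arithmetic, with provability predicate Prov_F.  The diagonal lemma
% yields a sentence G(F) that "says" of itself that it is unprovable:
\[
  F \vdash G(F) \leftrightarrow
      \lnot \mathrm{Prov}_F(\ulcorner G(F) \urcorner)
\]
% If F is sound, then F cannot prove G(F): a proof of G(F) would make
% Prov_F( #G(F) ) true, while F proves its negation via the
% equivalence above, so F would prove a falsehood.  Hence
% Prov_F( #G(F) ) is false, and so G(F) is true.  That is steps (4)
% and (5): a true sentence that F does not prove.
```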
Now the problematic spots that I can see are (2) and (8).

First of all, I simply don't understand why I must believe a priori
that F is sound, especially after reading the arguments in TENM and
SotM that this would be an impossible case!
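A standard fact bears on (2) here: by Gödel's second incompleteness
theorem, a consistent F of the kind at issue cannot prove even its
own consistency, let alone its soundness, so the belief demanded in
(2) cannot be grounded inside F itself (standard notation, my
addition, not from SotM):

```latex
\[
  F \nvdash \mathrm{Con}(F),
  \qquad \text{where } \mathrm{Con}(F) :\equiv
  \lnot \mathrm{Prov}_F(\ulcorner 0 = 1 \urcorner)
\]
```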

Secondly, why does it follow from the contradiction, arrived at _ex
hypothesi_ in an unsound system, that (1) cannot hold?

So the question returns as to why I should accept Penrose's conclusion
that strong AI is impossible, when there are _a priori_ arguments in
its favour, for example the ones given by Daniel Dennett in "Darwin's
Dangerous Idea".

Elucidation would be appreciated.

-- Olaf Weber
