Newsgroups: comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!pipex!uknet!festival!edcogsci!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Subject: Re: Putnam on Penrose
Message-ID: <CzsDKp.3tu@cogsci.ed.ac.uk>
Sender: usenet@cogsci.ed.ac.uk (C News Software)
Nntp-Posting-Host: bute-alter.aiai.ed.ac.uk
Organization: AIAI, University of Edinburgh, Scotland
References: <HPM.94Nov20181907@cart.frc.ri.cmu.edu>
Date: Thu, 24 Nov 1994 19:17:12 GMT
Lines: 78

>...
>So far, however, what Mr. Penrose has shown - that no program that we
>can KNOW to be correct can simulate all of our mathematical
>capabilities - is quite compatible with the claim that a computer
>program could in principle successfully simulate our mathematical
>capacities.

It looks to me like (a) Putnam agrees with this conclusion:

  no program that we can KNOW to be correct can simulate all of our
  mathematical capabilities

(b) this conclusion is (roughly) the subject of Chapter 2, and 
(c) Penrose tries to deal with the compatible possibility in 
Chapter 3.

Perhaps those who have studied this more closely than I have
can tell me if I have this wrong.

>Mr. Penrose ALMOST discusses the possibility of a program that can
>capture our mathematical capabilities without our being able to
>understand it, but in fact he misses it.  

I think he discusses this (in effect) in his earlier book,
though very briefly and not (to my mind) very convincingly
(he says maths isn't like that, i.e., not something we can't
understand).

It may be that it's covered by section 3.3 of the new book as
well.  It's hard for me to tell (yet) because of terminology
differences between Putnam and Penrose.  

Penrose talks about whether algorithms are unsound, unknowable,
not knowably sound, and so forth.  Putnam's language doesn't quite
line up with Penrose's, because Putnam talks about whether we
can understand (the program?).  So it's not clear that Penrose's
way of dividing things up leaves the gap that appears in Putnam's.

>   First he describes the
>hypothetical case of a program that simulates our mathematical
>capacity and is assumed to be simple enough for us to understand it
>thoroughly.  That such a program might not be provably sound is a
>possibility that Mr. Penrose dismisses as not plausible.  He then
>considers the possibility (which he also regards as implausible) that
>the program might be so complex that we could not even write down its
>description (let alone understand it).  He rejects this possibility
>because, were it actual, the program of "strong artificial
>intelligence" - simulating our intelligence on a computer IN PRACTICE
>- could not succeed (which is irrelevant to his thesis that our
>mathematical abilities cannot IN PRINCIPLE be simulated).  

Penrose does at least discuss the shift from in-principle to
in-practice and argues that it's justified.  It's clear that 
Putnam disagrees, but it's not completely clear why.

>        But - even
>apart from the totally unjustified way this latter possibility is
>dismissed - there is an obvious lacunae: the possibility of a program
>we could write down but not succeed in understanding is overlooked!
>
>This is the mathematical fallacy on which the whole book rests.

Now this puzzles me a bit.  There are several cases to consider,
and Penrose has overlooked one.  Suppose his conclusion follows in all the other
cases.  Don't we then get this: if we understand the program, 
Penrose is right?  Isn't that a significant conclusion in
itself?

Moreover, why can't Penrose "patch" his book by showing that
it's implausible that the algorithm could be one we couldn't
understand?  I'm not saying that would make Penrose right,
but it would seem to eliminate the particular gap identified
by Putnam.

BTW, did anyone find the same flaw as Putnam earlier?  I don't
want to go around saying "Putnam" all the time if in fact some
other people made the same point.

-- jeff
