From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!qt.cs.utexas.edu!zaphod.mps.ohio-state.edu!swrinde!gatech!psuvax1!hsdndev!husc-news.harvard.edu!zariski!zeleny Tue Jan 21 09:27:02 EST 1992
Article 2874 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca sci.philosophy.tech:1894 sci.logic:827 sci.math:5832 comp.ai.philosophy:2874
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!qt.cs.utexas.edu!zaphod.mps.ohio-state.edu!swrinde!gatech!psuvax1!hsdndev!husc-news.harvard.edu!zariski!zeleny
From: zeleny@zariski.harvard.edu (Mikhail Zeleny)
Newsgroups: sci.philosophy.tech,sci.logic,sci.math,comp.ai.philosophy
Subject: Re: Penrose on Man vs. Machine
Message-ID: <1992Jan18.134014.7771@husc3.harvard.edu>
Date: 18 Jan 92 18:40:13 GMT
References: <1991Dec27.051804.6985@cambridge.oracorp.com> <1991Dec27.184248.6939@husc3.harvard.edu> <17455.296842ba@amherst.edu>
Organization: Dept. of Math, Harvard Univ.
Lines: 79
Nntp-Posting-Host: zariski.harvard.edu

In article <17455.296842ba@amherst.edu> 
djvelleman@amherst.edu writes:

DV:
>  A while back, the following claim was made:

>In article <1991Dec27.184248.6939@husc3.harvard.edu>, 
>zeleny@zariski.harvard.edu (Mikhail Zeleny) writes:

MZ:
>> The point is not whether human beings can solve all instances of the
>> halting problem, or tell whether an arbitrary collection of axioms is
>> consistent, but that each time they do so in any particular case, their
>> reasoning is essentially non-algorithmic, as Penrose claims.

DV:
>  The discussion has drifted away from this issue, but recently it has
>returned.  Perhaps it is appropriate therefore to respond to this claim.
>  This whole discussion started out, I think, with a discussion of Penrose's
>"Godelian argument".  For analyzing that argument, the point is precisely
>whether human beings can solve all instances of the halting problem, or tell
>whether an arbitrary collection of axioms is consistent.  The argument depends
>on this as an unstated premise; without it, the argument fails.  It may
>very well be true that our reasoning is "essentially non-algorithmic", as
>Mr. Zeleny claims (although I doubt it), but the question was not whether or
>not this is true, but rather whether or not Penrose's argument establishes
>it.  I don't think it does.

Please note that I accept this premiss, whereas Penrose explicitly
disclaims it in his response to his critics.  Furthermore, I find it
acceptable only on the mathematical conception of possibility, exemplified
by adopting the Peano postulates, which guarantee a distinct successor for
each natural number.  I certainly do not wish to claim that we are
physically capable of telling whether an arbitrarily large collection of
axioms is consistent.
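As an aside, the impossibility behind the halting problem mentioned above
admits a compact illustration.  The following is a minimal sketch, in
Python, of Turing's diagonal argument; the names `halts` and `diagonal`
are hypothetical and appear nowhere in this thread or in Penrose:

```python
# Sketch of the diagonal argument: no total, always-correct halting
# decider can exist.  (Illustrative code, not from the original post.)

def halts(prog, arg):
    """A would-be halting decider.  Any fixed, total implementation
    must answer wrongly on some input; this one always says True."""
    return True

def diagonal(prog):
    """Diverges exactly when `halts` predicts that prog(prog) halts.
    The RuntimeError stands in for an infinite loop."""
    if halts(prog, prog):
        raise RuntimeError("simulated divergence: this branch never halts")
    return "halted"

# halts(diagonal, diagonal) returns True, yet diagonal(diagonal)
# diverges -- so this particular decider is wrong.  The same
# construction defeats every candidate decider, whatever it answers
# on the input (diagonal, diagonal).
```

The point of the sketch is that the contradiction is generic: whichever
answer a proposed decider gives about `diagonal` applied to itself,
`diagonal` is built to do the opposite, so no single algorithm settles
every instance; this is compatible with humans (or machines) settling any
number of particular instances case by case.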

DV:
>  The fact that Penrose's argument depends on this additional assumption 
>has recently been restated by others, for example by David Chalmers:

DC:
>>For Penrose's Godelian argument to go through, he'd have to
>>hypothesize the ability to determine consistency or inconsistency
>>of a given formal system, and there's no reason to believe that we
>>have this ability in general.

By the same token, in order for strong AI to succeed, its proponents have
to come up with a formal system of such complexity that we would be unable
to reflect on its consistency.  In other words, it is not sufficient that
all our reasoning be algorithmic; we also have to be able to discover and
recognize the algorithm, in spite of our inability to understand it.  (On
this, see Hilary Putnam's "Reflexive Reflections" in "Erkenntnis" circa
1985.)  The implausibility of this situation appears quite obvious to me.

So my premiss is not really needed to argue that strong AI is destined to
be a failure.

DV:
>  For another discussion of the same point, see the review of Penrose's book
>by Michael Barr in the December 1990 issue of the American Mathematical
>Monthly.
>
>  Dan Velleman
>  Dept. of Mathematics and Computer Science
>  Amherst College

Thanks for the reference.

`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'
: Qu'est-ce qui est bien?  Qu'est-ce qui est laid?         Harvard   :
: Qu'est-ce qui est grand, fort, faible...                 doesn't   :
: Connais pas! Connais pas!                                 think    :
:                                                             so     :
: Mikhail Zeleny                                                     :
: 872 Massachusetts Ave., Apt. 707                                   :
: Cambridge, Massachusetts 02139           (617) 661-8151            :
: email zeleny@zariski.harvard.edu or zeleny@HUMA1.BITNET            :
:                                                                    :
'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`
