Xref: newshub.ccs.yorku.ca sci.philosophy.tech:1897 sci.logic:828 sci.math:5835 comp.ai.philosophy:2884
Newsgroups: sci.philosophy.tech,sci.logic,sci.math,comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!bronze!chalmers
From: chalmers@bronze.ucs.indiana.edu (David Chalmers)
Subject: Re: Penrose on Man vs. Machine
Message-ID: <1992Jan18.230131.26325@bronze.ucs.indiana.edu>
Organization: Indiana University
References: <1991Dec27.184248.6939@husc3.harvard.edu> <17455.296842ba@amherst.edu> <1992Jan18.134014.7771@husc3.harvard.edu>
Date: Sat, 18 Jan 92 23:01:31 GMT
Lines: 19

In article <1992Jan18.134014.7771@husc3.harvard.edu> zeleny@zariski.harvard.edu (Mikhail Zeleny) writes:

>By the same token, in order for strong AI to succeed, its proponents have
>to come up with a formal system of such complexity that we would be unable
>to reflect on its consistency.  In other words, it is not sufficient that
>all our reasoning be algorithmic; we also have to be able to discover and
>recognize the algorithm, in spite of our inability to understand it.  (On
>this, see Hilary Putnam's "Reflexive Reflections" in "Erkenntnis" circa
>1985.)  The implausibility of this situation appears quite obvious to me.

It doesn't seem at all implausible to me that we could discover such
an algorithm empirically, e.g. by investigation of the brain, yet
understand it so poorly that we could never determine, by reflection
on the algorithm alone, whether it was consistent.

-- 
Dave Chalmers                            (dave@cogsci.indiana.edu)      
Center for Research on Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable."
