From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rpi!think.com!yale.edu!yale!mintaka.lcs.mit.edu!spdcc!das.harvard.edu!husc-news.harvard.edu!zariski!zeleny Tue Jan 21 09:27:10 EST 1992
Article 2889 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca sci.philosophy.tech:1899 sci.logic:829 sci.math:5842 comp.ai.philosophy:2889
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rpi!think.com!yale.edu!yale!mintaka.lcs.mit.edu!spdcc!das.harvard.edu!husc-news.harvard.edu!zariski!zeleny
From: zeleny@zariski.harvard.edu (Mikhail Zeleny)
Newsgroups: sci.philosophy.tech,sci.logic,sci.math,comp.ai.philosophy
Subject: Re: Penrose on Man vs. Machine
Message-ID: <1992Jan19.011008.7786@husc3.harvard.edu>
Date: 19 Jan 92 06:10:07 GMT
References: <17455.296842ba@amherst.edu> <1992Jan18.134014.7771@husc3.harvard.edu> <1992Jan18.230131.26325@bronze.ucs.indiana.edu>
Organization: Dept. of Math, Harvard Univ.
Lines: 50
Nntp-Posting-Host: zariski.harvard.edu

In article <1992Jan18.230131.26325@bronze.ucs.indiana.edu> 
chalmers@bronze.ucs.indiana.edu (David Chalmers) writes:

>In article <1992Jan18.134014.7771@husc3.harvard.edu> 
>zeleny@zariski.harvard.edu (Mikhail Zeleny) writes:

MZ:
>>By the same token, in order for strong AI to succeed, its proponents have
>>to come up with a formal system of such complexity that we would be unable
>>to reflect on its consistency.  In other words, it is not sufficient that
>>all our reasoning be algorithmic; we also have to be able to discover and
>>recognize the algorithm, in spite of our inability to understand it.  (On
>>this, see Hilary Putnam's "Reflexive Reflections" in "Erkenntnis" circa
>>1985.)  The implausibility of this situation appears quite obvious to me.

DC:
>It doesn't seem at all implausible to me that we could discover such
>an algorithm empirically, e.g. by investigation of the brain, but be
>sufficiently unable to understand it that we could never determine whether
>it was consistent by reflection on the algorithm alone.

Right.  Let's see how it goes: in the course of empirical observation, you
come up with a formal theory that you don't understand, without ever
passing through the intermediate stage of synthesizing it from the results
of your studies, since this process would naturally lead to understanding.
Moreover, either you have to dispense with the principle of semantic
compositionality, or admit that your theory will have a primitive
constituent part wholly incomprehensible to you, so that your lack of
understanding will not be due to its size.  Sounds like bullshit to me.
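The consistency point being traded back and forth here rests on Gödel's second incompleteness theorem; a standard statement of it (not from the original post) is:

```latex
% Goedel's second incompleteness theorem, standard form:
% if T is a consistent, recursively axiomatizable theory
% containing enough arithmetic (e.g. extending Peano
% Arithmetic, PA), then T does not prove its own
% consistency statement Con(T).
T \supseteq \mathsf{PA},\;
T \text{ consistent},\;
T \text{ recursively axiomatizable}
\;\Longrightarrow\;
T \nvdash \mathrm{Con}(T)
```

So if human mathematical reasoning were captured by such a theory T, we could not establish Con(T) by reasoning formalizable within T itself; the dispute above is over whether we could nonetheless identify T empirically without thereby coming to understand it.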

By the way, does this mean that you have finally realized that strong AI
does make epistemological claims?

>-- 
>Dave Chalmers                            (dave@cogsci.indiana.edu)      
>Center for Research on Concepts and Cognition, Indiana University.
>"It is not the least charm of a theory that it is refutable."


`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'
: What is good?  What is ugly?                             Harvard   :
: What is great, strong, weak...                           doesn't   :
: Don't know!  Don't know!                                  think    :
:                                                             so     :
: Mikhail Zeleny                                                     :
: 872 Massachusetts Ave., Apt. 707                                   :
: Cambridge, Massachusetts 02139           (617) 661-8151            :
: email zeleny@zariski.harvard.edu or zeleny@HUMA1.BITNET            :
:                                                                    :
'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`


