From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!hsdndev!husc-news.harvard.edu!zariski!zeleny Tue Jan 21 09:27:16 EST 1992
Article 2900 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca sci.philosophy.tech:1902 sci.logic:831 sci.math:5855 comp.ai.philosophy:2900
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!hsdndev!husc-news.harvard.edu!zariski!zeleny
From: zeleny@zariski.harvard.edu (Mikhail Zeleny)
Newsgroups: sci.philosophy.tech,sci.logic,sci.math,comp.ai.philosophy
Subject: Re: Penrose on Man vs. Machine
Message-ID: <1992Jan19.170838.7805@husc3.harvard.edu>
Date: 19 Jan 92 22:08:37 GMT
References: <1992Jan18.230131.26325@bronze.ucs.indiana.edu> 
 <1992Jan19.011008.7786@husc3.harvard.edu> <1992Jan19.212725.10371@bronze.ucs.indiana.edu>
Organization: Dept. of Math, Harvard Univ.
Lines: 55
Nntp-Posting-Host: zariski.harvard.edu

In article <1992Jan19.212725.10371@bronze.ucs.indiana.edu> 
chalmers@bronze.ucs.indiana.edu (David Chalmers) writes:

>In article <1992Jan19.011008.7786@husc3.harvard.edu> 
>zeleny@zariski.harvard.edu (Mikhail Zeleny) writes:

MZ:
>>Right.  Let's see how it goes: in the course of empirical observation, you
>>come up with a formal theory that you don't understand, without ever
>>passing through the intermediate stage of synthesizing it from the results
>>of your studies, since this process would naturally lead to understanding.
>>Moreover, either you have to dispense with the principle of semantic
>>compositionality, or admit that your theory will have a primitive
>>constituent part wholly incomprehensible to you, so that your lack of
>>understanding will not be due to its size.  Sounds like bullshit to me.

DC:
>Non sequitur.  The degree of understanding implied by empirical synthesis
>in no way implies the degree of understanding required to judge consistency.
>Nor does the degree of understanding implied by semantic compositionality.

If you have the means of defining a partial, or, better yet, a linear
ordering of "degrees of understanding", please share them with me;
otherwise kindly restate your argument in less obfuscatory terms.

MZ:
>>By the way, does this mean that you have finally realized that strong AI
>>does make epistemological claims?

DC:
>Not at all.  "Strong AI", i.e. the view that an appropriately programmed
>computer would think, makes no epistemological claim.  AI, more generally,
>certainly does.

So is strong AI consistent with the thesis that we are inherently incapable
of discovering such a program, as Putnam would have me believe?  This is a
far cry from strong AI as advocated by Dennett & Company.

>-- 
>Dave Chalmers                            (dave@cogsci.indiana.edu)      
>Center for Research on Concepts and Cognition, Indiana University.
>"It is not the least charm of a theory that it is refutable."


`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'
: What is good?  What is ugly?                             Harvard   :
: What is great, strong, weak...                           doesn't   :
: Don't know! Don't know!                                   think    :
:                                                             so     :
: Mikhail Zeleny                                                     :
: 872 Massachusetts Ave., Apt. 707                                   :
: Cambridge, Massachusetts 02139           (617) 661-8151            :
: email zeleny@zariski.harvard.edu or zeleny@HUMA1.BITNET            :
:                                                                    :
'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`