From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!bronze!chalmers Thu Jan 16 17:22:23 EST 1992
Article 2783 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!bronze!chalmers
From: chalmers@bronze.ucs.indiana.edu (David Chalmers)
Subject: Re: Searle Agrees with Strong AI?
Message-ID: <1992Jan16.182948.18737@bronze.ucs.indiana.edu>
Organization: Indiana University
References: <1992Jan16.054716.14332@oracorp.com> <1992Jan16.145637.26097@news.media.mit.edu>
Date: Thu, 16 Jan 92 18:29:48 GMT
Lines: 48

In article <1992Jan16.145637.26097@news.media.mit.edu> minsky@media.mit.edu (Marvin Minsky) writes:

>I have been watching this for a long time.  Would anyone care to
>explain to me what the various players in this game mean by
>"understanding"?  Clearly, it cannot be defined behaviorally, hence it
>must be something else, an externally undetectable attribute of an
>observed system.  Also, it appears to be an all or none thing, one
>that cannot be gradually acquired, or present to small degrees, etc.
>(And if it were, there'd be no way to demonstrate this.)

Whenever Searle says "understanding", "intentionality", and so on, I
always mentally substitute "consciousness".  What the Chinese room
really seems to be "establishing" is that the CR isn't conscious,
i.e. there's "nothing it's like" to be the CR.  Now, it seems coherent
to me that something could possess understanding or intentionality
without being conscious; others disagree on this, e.g. Searle thinks
that consciousness is a necessary condition for these things.  That
dispute may well be purely terminological, but in any case it is made
irrelevant by focusing instead on the key question: is the CR
conscious?

Of course consciousness isn't exactly a crystal-clear term either,
but it seems at least to be clear enough what it means in the
key sense: the possession of subjective experiences, having a
"what it's like to be".  E.g. if you're simulating the processing
that goes into seeing a red apple, either something is having
a subjective "red apple" experience or it isn't.  Seems much more
clear-cut to me than "understanding".

>Well, you can detect my prejudice.  How about this: let's let Searle
>off the hook for a moment, by asking this question:
>
>	If we could build a machine that is suitably reactive, and can
>	assemble raw materials so as to make working copies of itself
>	would the resulting machine be ALIVE?
>
>In  other words, is "understanding" analogous to "living" in the old
>vitalist controversies?

I don't know about "understanding" -- which may well be not unlike
"living" in this respect -- but I do think there is a fact of the matter
about whether a given system possesses conscious experiences in this sense;
it's not a matter that's a subject for legislation, as "life" might be.

-- 
Dave Chalmers                            (dave@cogsci.indiana.edu)      
Center for Research on Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable."
