From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!van-bc!ubc-cs!unixg.ubc.ca!ramsay Thu Jan 16 17:20:11 EST 1992
Article 2689 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca comp.ai.philosophy:2689 sci.philosophy.tech:1833
Newsgroups: comp.ai.philosophy,sci.philosophy.tech
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!van-bc!ubc-cs!unixg.ubc.ca!ramsay
From: ramsay@unixg.ubc.ca (Keith Ramsay)
Subject: Re: Penrose on Man vs. Machine
Message-ID: <1992Jan14.040820.26868@unixg.ubc.ca>
Sender: news@unixg.ubc.ca (Usenet News Maintenance)
Nntp-Posting-Host: chilko.ucs.ubc.ca
Organization: University of British Columbia, Vancouver, B.C., Canada
References: <1992Jan9.211337.14379@gpu.utcs.utoronto.ca> <5939@skye.ed.ac.uk> <1992Jan13.230532.26592@gpu.utcs.utoronto.ca>
Date: Tue, 14 Jan 1992 04:08:20 GMT

In article <1992Jan13.230532.26592@gpu.utcs.utoronto.ca> 
pindor@gpu.utcs.utoronto.ca (Andrzej Pindor) writes:
>This only shows that there is a lot of ambiguity about the term
>('understanding') and confirms, in my opinion, a need to decide on a
>definition of sorts - something which could be used as a criterion.

Getting a suitable definition is something which could, at best, only
occur toward the *end* of a discussion such as this; once one has a
definition, it would stop being a philosophical problem. So long as
we're still philosophizing, we should expect to be dealing with terms
which don't have operational definitions yet.

It is natural to be somewhat impatient with philosophical problems,
especially when it doesn't look like we're getting any closer to
solving them. But ultimately philosophy is necessary; either we'll do
it consciously, or we'll unconsciously follow philosophy which we
haven't examined for ourselves.

>I have a better analogy for you:
>You go for an exam to test your knowledge of Chinese, with a colleague who
>looks oriental. He is given a test which he answers correctly and passes. You
>are given a like test, you answer correctly but then the examiner says: 'You
>look caucasian (I assume for the sake of argument), I am not sure whether you
>'really' understand the spirit of the language'. What would you say?

The thing is, Searle is applying the criterion the other way around.
The person inside the Chinese room tells us that he *doesn't*
understand Chinese, that he wouldn't even be able to distinguish it
from Japanese or Korean, and so on. Suppose you then apply a naive
behaviorist test and tell the person in the room, "No, you're
wrong, you do understand Chinese; you just don't feel like you do.
What is more, you know much about Chinese geography and politics."
Surely there is something deficient in this procedure. If so, then we
have to apply our "test" with a little more sophistication.

As it happens, I think the systems reply to Searle is reasonable; the
whole system (in all likelihood) understands Chinese, although the
person carrying out the rules does not. But perhaps it is worthwhile,
then, to think a bit about what constitutes a coherent "system", while
considering how a system might understand language?
--
                       "In no way, shape or form did Kevin represent
Keith Ramsay            a viable alternative to mental illness."
ramsay@unixg.ubc.ca                  -Philip K. Dick, _VALIS_