From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!wupost!uunet!psinntp!scylla!daryl Thu Jan  9 10:34:09 EST 1992
Article 2560 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!wupost!uunet!psinntp!scylla!daryl
From: daryl@oracorp.com
Subject: Re: Causes and Reasons
Message-ID: <1992Jan8.181617.24084@oracorp.com>
Organization: ORA Corporation
Date: Wed, 8 Jan 1992 18:16:17 GMT

Jeff Dalton writes:

> One of the clever ideas behind the Chinese Room is to use
> understanding Chinese rather than "tapping into some a priori
> semantic reality" as the thing we're looking for.

I don't personally believe that there is anything especially clever
about Searle's Chinese Room argument, but everyone is entitled to his
or her own opinion.

Anyway, when Searle says that the Chinese Room doesn't understand that
the Chinese symbol for "hamburger" refers to the real-world object
hamburger, he is appealing to an unsubstantiated difference between
human understanding and artificial understanding. How does *our* word
"hamburger" really refer to honest-to-God hamburgers? What does it
mean to say that our word really refers to real hamburgers, other than
to say that our words can somehow "tap into" semantic reality?

> Some people understand Chinese. What Searle claims to have
> shown is that computers can't understand Chinese *in the very
> same sense of understand*.

And Searle's opponents claim that he hasn't shown it.

There is some confusion as to what exactly Searle thinks he is doing.
If he is trying to show that Strong AI has not been proved, I would
certainly agree--it is a working hypothesis. If he is trying to show
that Strong AI is definitely false, then his argument fails, and the
systems reply shows it. 

Searle refers to the systems reply as "begging the question". That is
nonsense. The question, as I see it, is not whether there is
understanding in the Chinese Room, but whether Searle has proved that
there isn't. If Searle claims that he can show that Strong AI is
nonsense, he has to show that assuming Strong AI leads to a
contradiction (or at least to an absurdity), and he hasn't done so. By
claiming that the systems reply is begging the question, Searle is
essentially saying "Only someone who already believes in Strong AI
would believe in the systems reply". So what? If Searle believes he
can show that Strong AI is nonsense, then he can certainly show that
Strong AI plus the Systems Reply is nonsense. However, it is circular
reasoning on the part of Searle if his argument is:

     1. Strong AI is nonsense, because the Systems Reply is nonsense.
     2. The Systems Reply is nonsense because it depends on Strong AI,
        which is nonsense.

Daryl McCullough
ORA Corp.
301A Harris B. Dates Dr.
Ithaca, NY 14850-1313
