From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!wupost!uunet!psinntp!scylla!daryl Tue Jan 28 12:15:05 EST 1992
Article 2962 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!wupost!uunet!psinntp!scylla!daryl
From: daryl@oracorp.com
Subject: Re: Virtual Person?
Message-ID: <1992Jan21.190750.4611@oracorp.com>
Organization: ORA Corporation
Date: Tue, 21 Jan 1992 19:07:50 GMT

Jeff Dalton writes:

> I sometimes think of Searle's argument like this:
>
> 1. If strong AI is right, then the  Chinese Room understands
>   Chinese.
>
> 2. If the Room understands Chinese, it must be because the
>   person in the room understands Chinese.
>
> 3. But the person doesn't.
>
> 4. So the room doesn't.
>
> 5. So Strong AI is wrong.

This is a very concise, understandable presentation of Searle's argument.

In this form, it is clear what the fishy points are. In my opinion,
the conjunction of 2 and 3 is suspect, because the two together depend
on an imprecise notion, that of a "person". If we say, for the time being,
that a mind is "that which is capable of understanding", then the
argument becomes the following:

1. If strong AI is right, then there is a mind in the Chinese Room that
understands Chinese.

2. The only mind in the Chinese Room is the one that the person brought
into the room.

3. That mind doesn't understand Chinese.

4. So the room doesn't.

5. So Strong AI is wrong.

The crucial question is then whether 2 is true. Is it possible that
following a set of rules can create a mind? Is it possible for a
person simultaneously to have more than one mind?

Searle answers the last two questions "no", and Strong AI (in
particular, the Systems Reply) answers the last two questions "yes".


Daryl McCullough
ORA Corp.
Ithaca, NY
