Article 2804 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!wupost!uunet!mcsun!uknet!edcastle!aifh!bhw
From: bhw@aifh.ed.ac.uk (Barbara H. Webb)
Newsgroups: comp.ai.philosophy
Subject: Re: Intelligence testing
Message-ID: <1992Jan16.122937.23838@aifh.ed.ac.uk>
Date: 16 Jan 92 12:29:37 GMT
References: <1992Jan14.015806.23985@oracorp.com> <5982@skye.ed.ac.uk> <1992Jan15.185342.11589@aifh.ed.ac.uk> <5993@skye.ed.ac.uk>
Reply-To: bhw@aifh.ed.ac.uk (Barbara H. Webb)
Organization: Dept AI, Edinburgh University, Scotland
Lines: 51

In article <5993@skye.ed.ac.uk> jeff@aiai.UUCP (Jeff Dalton) writes:
>
>Let's suppose that I think the behavior is not possible without
>intentionality.  (NB, I'm just picking intentionality as one of
>the "usual words"; I don't mean for much of importance to depend
>on that particular choice.)  Suppose I offer Searle's argument to
>some advocate of the Turing Test.
>
>You could use the very same arguments to show that _I_ am "committed to
>the belief that the Chinese room could exist, not merely proposing it
>for the sake of argument, otherwise [I have] no basis for rejecting the
>Turing test".
>
>Well, you might be right to conclude a number of things about
>me, but one thing you wouldn't be right to conclude is that I think
>the behavior is possible without intentionality -- because (according
>to the supposition) what I actually think is the opposite.

Okay, so the conversation goes like this:

You: I believe that to hold an intelligent conversation in any human
language requires understanding.

AI Researcher: That's interesting. I have written a program that, when
run on my computer, allows a Chinese person to hold a conversation with
the computer that they find indistinguishable from talking to another
Chinese person. Does that mean my computer has understanding?

You: Of course not. As Searle suggested, imagine I were carrying out
the instructions in your program, converting Chinese symbols to other
Chinese symbols, and thus conversing with your Chinese friend.
Obviously I wouldn't understand a word of Chinese.

AI Researcher: And the system as a whole wouldn't understand either?

You: No, that's the foolish "systems" reply. How could I, plus a whole
bunch of squiggles on paper, have any more understanding of Chinese than
I do myself? If I memorised all the rules, I still wouldn't understand.

AI Researcher: So though my program can converse, it can't understand?

You: That's right.

AI Researcher: So you don't think conversation requires understanding?

You: Of course I do. Isn't that what I said before?

AI Researcher: Oh!


BW


