Article 2807 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!sun-barr!ames!elroy.jpl.nasa.gov!usc!wupost!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Intelligence testing
Message-ID: <6000@skye.ed.ac.uk>
Date: 16 Jan 92 18:57:45 GMT
References: <1992Jan14.015806.23985@oracorp.com> <5982@skye.ed.ac.uk> <1992Jan15.185342.11589@aifh.ed.ac.uk> <5993@skye.ed.ac.uk> <1992Jan16.122937.23838@aifh.ed.ac.uk>
Reply-To: jeff@aiai.UUCP (Jeff Dalton)
Organization: AIAI, University of Edinburgh, Scotland
Lines: 124

In article <1992Jan16.122937.23838@aifh.ed.ac.uk> bhw@aifh.ed.ac.uk (Barbara H. Webb) writes:
>In article <5993@skye.ed.ac.uk> jeff@aiai.UUCP (Jeff Dalton) writes:

BTW, here are some other quotes from Searle, in _Minds, Brains and
Science_, BBC 1984:

 p 32: Suppose for the sake of argument that the computer's answers
       are as good as those of a native Chinese speaker.

 p 37: No doubt we will be much better able to simulate human
       behavior on computers than we can at present, and certainly
       much better than we have been able to in the past.

This doesn't prove that Searle thinks the simulation can never
be as good as a native speaker, of course, but it does suggest 
that he might be employing the possibility of such performance
only "for the sake of argument".

Back to your message ... 

>>Let's suppose that I think the behavior is not possible without
>>intentionality.  [...]  Suppose I offer Searle's argument to
>>some advocate of the Turing Test.
>>
>>You could use the very same arguments to show that _I_ am "committed
>>to the belief that the Chinese room could exist [...]
>>
>>Well, you might be right to conclude a number of things about
>>me, but one thing you wouldn't be right to conclude is that I think
>>the behavior is possible without intentionality -- because (according
>>to the supposition) what I actually think is the opposite.
>
>Okay, so the conversation goes like this
>
>You: I believe that to hold an intelligent conversation in any human
>language requires understanding.

Please note that this is not part of offering Searle's argument.
Indeed, I may have realized that I'm not going to convince anyone in
AI that there's some conversational behavior that computers can never
produce.  That's why I like Searle's argument: it applies no matter
how good the behavior is.  And, if it works, it works even if I'm wrong
in thinking that they'll never get the required behavior.

Moreover, if Searle's argument works, it works regardless of
who offers it, and regardless of what other things that person
happens to believe.  This is something that's generally true of
arguments: they are right or not regardless of who makes them.
There is one way in which the Chinese Room argument doesn't
fit this general rule: the person in the room must be someone
who doesn't know Chinese.  But for any person who doesn't know
all languages, the appropriate adjustment is easily made.

>AI Researcher: That's interesting. I have written a program that when
>run on my computer allows a Chinese person to hold a conversation with
>the computer which they find indistinguishable from talking to another
>Chinese person. Does that mean my computer has understanding?

Of course, no one can do this today.  There is no such program,
and not much prospect of there being one in the near future.
But if such a program came along, and I was convinced by Searle's
argument that it did not have understanding, then I would have to
change my mind on the question of whether such understandingless
behavior was possible.  But that doesn't mean I have to change
my mind right now, nor that I am somehow committed to the opposite
view from the one I actually hold.

>You: Of course not. As Searle suggested, imagine I was carrying out the
>instructions that are in your program, converting Chinese symbols to
>other Chinese symbols, and thus conversing with your Chinese friend.
>Obviously I wouldn't understand a word of Chinese.
>
>AI Researcher: And the system as a whole wouldn't understand either?
>
>You: No, that's the foolish "systems" reply. How could I, plus a whole
>bunch of squiggles on paper have any more understanding of Chinese than
>I myself? If I memorised all the rules, I still wouldn't understand.
>
>AI Researcher: So though my program can converse, it can't understand?
>
>You: That's right.
>
>AI Researcher: So you don't think conversation requires understanding?
>
>You: Of course I do. Isn't that what I said before?

But, probably (see above), I wouldn't have said it before.  Maybe
I decided not to waste my time debating such issues, since I had
this great argument of Searle's ...

Or, I could say this:

  It turns out that I do think conversation requires understanding.
  But it now looks like I must be wrong.  Evidently there's some
  clever trick that does it.  And so it's a good thing I've been
  relying on Searle's argument all these years, because it handles
  this case.

I suspect that what you're getting at is that if I think conversation
without understanding is impossible, then I should accept the Turing
Test, because whenever there was conversation there would (in my view)
have to be understanding.  Well, if I could _show_ that conversation
was impossible without understanding, then I should indeed accept
the Turing Test.  But I can't show it's impossible, and neither can
the people who want us to accept the TT right now.

The arguments for accepting the TT right now do look rather like
residual operationalism and behaviorism.  They often involve saying
(or implying) that there's no way to test for "real understanding",
that the question of "real understanding" is meaningless or
unscientific, and so on.

Another point I think you were making before was that if Searle can
show that computers can't understand by using "syntax isn't enough for
semantics", then what does the Chinese Room add?  Well, you can think
of it as Searle having two arguments, or an argument and an example
(or, as Dennett says, an intuition pump).  Since different people may
find different arguments convincing, why not use both?  Note that
using both does not mean putting them together in one argument as
you did, I think, in <1992Jan14.151104.16978@aifh.ed.ac.uk>.

BTW, 

-- jeff


