From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!spool.mu.edu!uunet!mcsun!uknet!edcastle!aifh!bhw Thu Jan 16 17:21:40 EST 1992
Article 2712 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!spool.mu.edu!uunet!mcsun!uknet!edcastle!aifh!bhw
From: bhw@aifh.ed.ac.uk (Barbara H. Webb)
Newsgroups: comp.ai.philosophy
Subject: Re: Intelligence testing
Message-ID: <1992Jan14.151104.16978@aifh.ed.ac.uk>
Date: 14 Jan 92 15:11:04 GMT
Article-I.D.: aifh.1992Jan14.151104.16978
References: <1992Jan9.185619.1336@oracorp.com> <5946@skye.ed.ac.uk>
Reply-To: bhw@aifh.ed.ac.uk (Barbara H. Webb)
Organization: Dept AI, Edinburgh University, Scotland
Lines: 80

Me:
>>>>> Searle and others also seem to think that the behaviour is perfectly
>>>>> possible without such processes (without 'real' intentionality,
>>>>> consciousness, thinking).
Jeff:
>>> Searle doesn't think that.  What is the evidence for this claim?
Daryl:
>>If the Chinese Room is possible, then it follows (assuming Searle is
>>correct, which I don't) that proper behavior without understanding is
>>possible.
Jeff:
>That Searle is willing to postulate something in order to 
>present an argument hardly shows he thinks it's actually the
>case.  I don't see any reason to suppose Searle thinks the
>behavior is perfectly possible without intentionality.  
>Indeed, I seem to recall that he says the opposite.

So Searle's argument becomes: "Imagine a person following a program
that allows them to imitate the conversation of a Chinese speaker. This
isn't possible [*this is the part Jeff is adding*] because this system
won't really understand Chinese. It won't understand Chinese because
programs have only syntax, and understanding requires semantics." Now
what is the point of the Chinese room in this argument? Searle
introduced the Chinese room as a specific example of something that
could in principle exist and could pass the Turing test, yet wouldn't
really understand (thinking everyone would agree it was intuitively
obvious that there was no understanding in this scenario; not everyone
did). If he doesn't think it could exist, even in principle, then he
has no argument against the Turing test here, and the concept of the
Chinese room adds nothing whatsoever to the argument that the syntactic
nature of programs means they are incapable of supporting the semantics
of minds.

On the other hand, if he thinks it could exist, then he loses what was the
main argument that led to the rejection of behaviourism, the argument
that human behaviour _can't_ be properly explained without postulating 
mental processes as causes of that behaviour. Okay, 'mental processes'
is vague: for some people it means the kinds of processes computers can
do; for others, things like intentionality, or even free will, are
taken to be necessary explanatory concepts. But if you _can_
have the behaviour without those processes, then they are no longer
_necessary_ explanatory concepts, and you need further justification for
using them in the case of humans.

Unfortunately, my previous attempt to reply to people discussing my 
"Turing test is not Behaviourist" post seems to have disappeared. One
point raised (by Bernie Simon) was that, even though understanding (he
used 'intelligence') might be a causal factor in the human ability to
converse, it is a fallacy to conclude that anything that can converse
must therefore have understanding. I agree it is a fallacy to say it
_must_; but that
does not make it faulty reasoning to say 'the ability to converse is
very good evidence for understanding', which is as much as anyone really
wants to claim, I think. He said that "unless we assume Behaviourism,
which says that behaviour is identical with intelligence, the Turing
Test fails", but this isn't so: we need only assume that intelligent
behaviour is generally _caused_ by intelligence, so similar behaviour
probably has a similar cause.

Alan Smail pointed out that Turing himself was making no particular
assumptions about what caused the behaviour ('behaviourist' external
contingencies; 'cognitive' symbol processing; or 'mentalist' intentional
abilities) when he proposed his test. This is true: Turing was instead
defining an observable phenomenon (a computer that could converse so as
to be indistinguishable from a human) and discussing whether such a
thing was, in principle, possible. Most of the debate generated by
Searle seems to accept that it is possible but to disagree over whether
it would mean the computer has a mental life comparable to a human's.
_This_ debate does require assumptions about the causal role of this
mental life for human behaviour; and my original point was that
supporting the Turing Test generally involves assuming that this mental
life does have a causal role, whereas Behaviourism assumes it has no
causal role (is of no explanatory value).

Searle's disagreement with AI might
make more sense, or at least not be such a source of confusion, if he
had actually addressed Turing's idea and tried to demonstrate that
there are a priori reasons why a computer couldn't converse like a
human. The 'Chinese Room' argument doesn't do this.

BW


