From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rpi!usc!cs.utexas.edu!uunet!mcsun!uknet!edcastle!aiai!jeff Thu Jan 16 17:21:44 EST 1992
Article 2718 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rpi!usc!cs.utexas.edu!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Intelligence testing
Message-ID: <5979@skye.ed.ac.uk>
Date: 14 Jan 92 21:51:19 GMT
References: <1992Jan9.185619.1336@oracorp.com> <5946@skye.ed.ac.uk> <1992Jan14.151104.16978@aifh.ed.ac.uk>
Reply-To: jeff@aiai.UUCP (Jeff Dalton)
Organization: AIAI, University of Edinburgh, Scotland
Lines: 207

In article <1992Jan14.151104.16978@aifh.ed.ac.uk> bhw@aifh.ed.ac.uk (Barbara H. Webb) writes:
>Me:
>>>>>> Searle and others also seem to think that the behaviour is perfectly
>>>>>> possible without such processes (without 'real' intentionality,
>>>>>> consciousness, thinking).
>Jeff:
>>>> Searle doesn't think that.  What is the evidence for this claim?
>Daryl
>>>If the Chinese Room is possible, then it follows (assuming Searle is
>>>correct, which I don't) that proper behavior without understanding is
>>>possible.

That's right.  If the CR is possible, then it follows (assuming
Searle is right) that behavior without understanding is possible.

Now, the CR is meant to stand for a computer running a program.  
It can be argued that it isn't actually a good representative --
that it matters, for example, that the Room would be extremely slow.
Nonetheless, that's the idea: here's this computer, it's running
the right program, it has the right behavior, and yet (contrary
to Strong AI) it doesn't understand.

Think of it like this: even if a computer running a program
could get the right behavior, it still wouldn't understand.
That's what the CR supposedly shows.

Note that nothing here requires that Searle think that a
computer running a program could have the right behavior.
Searle could make a different argument, one designed to
show that the behavior wouldn't be possible, but -- for
whatever reason -- he chose to make this argument instead.

I happen to think that Searle does not think the behavior is
possible without real intentionality.  I could be wrong.
But arguments such as those presented below don't show that
I am.

>Jeff
>>That Searle is willing to postulate something in order to 
>>present an argument hardly shows he thinks it's actually the
>>case.  I don't see any reason to suppose Searle thinks the
>>behavior is perfectly possible without intentionality.  
>>Indeed, I seem to recall that he says the opposite.
>
>So Searle's argument becomes "Imagine a person following a program that
>allows them to imitate the conversation of a Chinese person. This isn't
>possible [*this is the part Jeff is adding*]
>because this system won't really understand Chinese. It won't 
>understand Chinese because programs
>only have syntax, and understanding requires semantics."

That is not at all what I am doing.  I am not adding to Searle's
Chinese Room argument.  I am saying it has a premise that Searle in
fact thinks is false.  There's nothing especially strange about
that, and I find it very hard to understand why you think it is
so significant.  

>Now what is the
>point of the Chinese room in this argument? Searle introduced the
>Chinese room to be a specific example of something that could in
>principle exist, could pass the Turing test, yet wouldn't really understand
>(thinking that everyone would agree it was intuitively obvious that there
>was no understanding in this scenario, which they didn't). 

The Chinese Room was introduced to be a specific example of a computer
running whatever program it was that AI researchers had developed.  It
is not at all necessary for Searle to believe that such a program is
possible, even in principle.

>If he doesn't
>think it could, even in principle, exist, then he has no argument
>against the Turing test here: and the concept of the Chinese room adds
>nothing whatsoever to the argument that the syntactic nature of programs
>means they are incapable of supporting the semantics of minds. 

You still seem to be in the grip of the idea that there's no
argument unless the person making the argument thinks the
premises are true, the reasoning correct, and so on.  If the
premises _are_ false, then, yes, the argument falls.  But it does
not fall merely because the person making the argument thinks they
are false.  The most you should say is that Searle should not
himself conclude that the Turing Test doesn't work.

Now, if the behavior is impossible without intentionality, etc.,
then the TT would work in the sense that whenever we find the
behavior there would be intentionality, etc.  But we can reach
this rehabilitation of the TT only after we can show that the
behavior is impossible without intentionality, etc.  Those who
think we should accept the TT right now have shown nothing of
the kind.  Instead, they (or most of them) think we should use
the TT because we can never tell whether "real intentionality"
is present.  They regard the question of real intentionality as
unscientific or even meaningless.

>On the other hand, if he thinks it could exist, then he loses what was the
>main argument that led to the rejection of behaviourism, the argument
>that human behaviour _can't_ be properly explained without postulating 
>mental processes as causes of that behaviour.

Behaviorism isn't the only way to explain behavior without
postulating mental processes.  Behaviorism was rejected in part
because it supposed that all we had to look at was things like 
operant conditioning.  That's why the rejection of Behaviorism
doesn't automatically transfer to the positions of, say, Chalmers
and McDermott.

> Okay, 'mental processes'
>is vague, and for some people is taken to be the kinds of processes
>computers can do; for others, things like intentionality, or even free
>will are taken to be necessary explanatory concepts. But if you _can_
>have the behaviour without those processes, then they are no longer
>_necessary_ explanatory concepts, and you need further justification for
>using them in the case of humans.

Once again, even if we accept the point that such behavior could be
explained without reference to those processes (in particular for the
cases where it occurs without them), that does not show that every
instance of the behavior can be explained without such reference.
There may be more than one way to get the behavior, and different
ways may require different kinds of explanation.

In particular, if computers could get the behavior by running the
right program but nonetheless didn't have "mental processes", and
yet humans did have mental processes, that would not show that
the mental processes have no real significance in the human case.
That some completely different system can get by without mental
processes hardly shows that a system that does have mental
processes doesn't need them either.

That is, the "further justification" is that the mental processes are
present and seem to be involved in behavior.  That's not a proof
that the mental processes matter, but we're not really in a position
to prove anything either way at this point.

>Unfortunately, my previous attempt to reply to people discussing my 
>"Turing test is not Behaviourist" post seems to have disappeared. One
>point raised (by Bernie Simon) was that, even though understanding (he used 
>'intelligence') might be a causal factor in human ability to converse,
>it is a fallacy to say that therefore, anything that can converse must
>have understanding.

I agree.

> I agree it is a fallacy to say it _must_; but that
>does not make it faulty reasoning to say 'the ability to converse is
>very good evidence for understanding', which is as much as anyone really
>wants to claim, I think. 

It's certainly not the limit of what people actually do claim.

> He said that "unless we assume Behaviourism,
>which says that behaviour is identical with intelligence, the Turing
>Test fails", but this isn't so: we need only assume that intelligent
>behaviour is generally _caused_ by intelligence, so similar behaviour
>probably has a similar cause.

The TT would still fail, because it fails to show intelligence _is_
present, only that it's _probably present_.  So it couldn't be used to
show such things as that the Chinese Room system understands.

>Alan Smail pointed out that Turing himself was making no particular
>assumptions about what caused the behaviour ('behaviourist' external
>contingencies; 'cognitive' symbol processing; or 'mentalist' intentional
>abilities) when he proposed his test. This is true, Turing instead was
>defining an observable phenomenon (a computer that could converse so as
>to be indistinguishable from a human) and discussing whether such a
>thing was, in principle, possible. 

Not really.  He wasn't just considering whether the behavior
would be possible.  He was also suggesting that we substitute
this operational test for the question "can X think?" and
then consider its application to the case where X is a machine.

>Most of the debate generated by
>Searle seems to accept it is possible, but disagree over whether it
>would mean that the computer has a comparable mental life to a human.
>_This_ debate does require assumptions about the causal role of this
>mental life for human behaviour; and my original point was that
>supporting the Turing Test generally involves assuming that this mental
>life does have a causal role, whereas Behaviourism assumes it has no
>causal role (is of no explanatory value).

I still don't see why supporting the TT generally involves assuming
that this mental life does have a causal role.  It often involves
assuming that questions about mental life are unscientific or
meaningless, or it involves saying that we don't really know
whether _other people_ have the mental life in question, so
we shouldn't ask for any more in the case of machines.

>Searle's disagreement with AI might
>make more sense, or at least not be such a source of confusion, if he
>had actually addressed Turing's idea, and tried to demonstrate that 
>there are a priori reasons why a computer couldn't converse like a
>human. The 'Chinese Room' argument doesn't do this.

As you must know, it's difficult to establish limits on the
behavior computers are capable of producing.  If what you want
to show is that understanding is not just a matter of running
the right program, and you think you can do that without having to
show that computers couldn't even produce the right behavior,
why not do so?  Especially since other philosophers had already
tried to show that computers wouldn't be able to behave like
humans.

And why the restriction to a priori reasons?  

-- jd