Article 2759 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Intelligence testing
Message-ID: <5993@skye.ed.ac.uk>
Date: 15 Jan 92 20:59:18 GMT
References: <1992Jan14.015806.23985@oracorp.com> <5982@skye.ed.ac.uk> <1992Jan15.185342.11589@aifh.ed.ac.uk>
Reply-To: jeff@aiai.UUCP (Jeff Dalton)
Organization: AIAI, University of Edinburgh, Scotland
Lines: 115

In article <1992Jan15.185342.11589@aifh.ed.ac.uk> bhw@aifh.ed.ac.uk (Barbara H. Webb) writes:
>>What I am saying in this thread is that
>>Searle thinks the behavior is not possible without understanding.
>>Maybe I'm wrong, of course, and a relevant quote from Searle
>>would show that I am.  I will also look for such direct evidence
>>on this point and let you know if I find it.
>
>From the reprint of Searle's "Minds, Brains and Programs" in
>"The Mind's I", Hofstadter & Dennett, 1981:

It turns out that I reread that paper just the other day, and
I do not agree with your interpretation of it.  Indeed,
I suspect the reason you interpret it as you do is that you
are still convinced that Searle _must_ think the behavior is
possible without real understanding, because otherwise (you
say) "he has no basis for rejecting the Turing test".

>p360 "But precisely one of the points at issue is the adequacy of the
>Turing test. The example [the Chinese Room] shows that there could be
>two "systems", both of which pass the Turing test, but only one of which
>understands..."

The Chinese Room example does not, of course, show that there could
actually be such a system.  And Searle must know that.

>p371 "we are tempted to postulate mental states in the computer similar
>to human mental states. But once we see that it is both conceptually and
>empirically possible for a system to have human capacities in some realm
>without having any intentionality at all, we should be able to overcome
>this impulse ... in this paper, I have tried to show that a system could
>have input and output capabilities that duplicated those of a native
>Chinese speaker, and still not understand Chinese, regardless of how it
>was programmed"

Note "... and empirically possible ... in some realm".  We have
seen nothing of the sort if the realm is Chinese conversation.

Moreover, Searle has done nothing whatsoever to show that a system
that "could have input and output capabilities that duplicated those
of a native Chinese speaker, and still not understand Chinese"
could actually exist or be built.

If you take such passages as showing that Searle thinks such a
system is possible, you should also take them as showing that
Searle thinks he has "tried to show" that such a system could
be built.  But Searle hasn't tried to show that, and I don't
see any good reason to suppose that he thinks he has.

In particular, Searle has not tried to show that there could be a
program that produced the right behavior.  All he's done is assume
that the AI folk could produce such a program, so that he can
argue against their claim that such a program would be sufficient
for producing a mind.  

>I think these quotes are fair representations of Searle's position in
>this paper: Searle is committed to the belief that the Chinese room
>could exist, not merely proposing it for the sake of argument, otherwise
>he has no basis for rejecting the Turing test, which he wants to do.

Suppose he does, in fact, think that the behavior is possible without
the right internals.  It is nonetheless wrong to say he is
committed to the belief that the Chinese room could exist because
otherwise he has no basis for rejecting the TT.

You've repeated that sort of claim several times, and I suspect
it's one of the reasons you interpret Searle as you do.  It still
makes little sense to me, but evidently my attempts at explanation
haven't worked.  Perhaps it will work better if I say it a different
way.

Let's suppose that I think the behavior is not possible without
intentionality.  (NB, I'm just picking intentionality as one of
the "usual words"; I don't mean for much of importance to depend
on that particular choice.)  Suppose I offer Searle's argument to
some advocate of the Turing Test.

You could use the very same arguments to show that _I_ am "committed to
the belief that the Chinese room could exist, not merely proposing it
for the sake of argument, otherwise [I have] no basis for rejecting the
Turing test".

Well, you might be right to conclude a number of things about
me, but one thing you wouldn't be right to conclude is that I think
the behavior is possible without intentionality -- because (according
to the supposition) what I actually think is the opposite.

Another thing you wouldn't be right to conclude is that, because
I think the opposite, the argument I offered doesn't show that we
should reject the TT.  Perhaps, for instance, I am wrong in thinking
that the behavior is impossible without intentionality.

On the other hand, I could be right and nonetheless have "a basis for
rejecting the Turing Test".  After all, I don't yet know that I'm
right.  I don't yet know enough.  To adopt the TT now would be to
assume that I am right, that is, to beg the question.

>Oddly enough, the second quote comes from the paragraph where he accuses
>the Turing test of being "unashamedly behaviouristic and
>operationalistic". "Postulating mental states" in computers or in humans
>is hardly a typical activity for behaviourists.

I don't find it odd.  The TT _is_ operationalistic.  Do you really
want to dispute that?  So that leaves "behavioristic".  I'd find it
hard to understand your remark unless you thought the Turing Test
postulated mental states, something that would also be consistent
with what you've said in other messages.

The TT can be used as a "mental state detector", and it can be
used as a way to avoid "unscientific" or "meaningless" questions
about whether something "thinks".  The latter use does not
require postulating mental states and indeed is compatible
with the view that any talk of mental states is a bunch of
pseudo-scientific nonsense.

-- jd


