From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!wupost!uunet!mcsun!uknet!edcastle!aiai!jeff Mon Dec  9 10:48:29 EST 1991
Article 1910 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!wupost!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Searle (was Re: Daniel Dennett (was Re: Comme
Message-ID: <5812@skye.ed.ac.uk>
Date: 6 Dec 91 18:36:45 GMT
References: <94066@brunix.UUCP> <1991Nov24.201501.5845@husc3.harvard.edu> <JMC.91Nov24195716@SAIL.Stanford.EDU> <1991Dec5.203442.12030@cs.yale.edu>
Reply-To: jeff@aiai.UUCP (Jeff Dalton)
Organization: AIAI, University of Edinburgh, Scotland
Lines: 106

In article <1991Dec5.203442.12030@cs.yale.edu> mcdermott-drew@CS.YALE.EDU (Drew McDermott) writes:

>To back John up (and oppose Zeleny, who needs to be opposed on grounds
>of preserving civilized discourse if nothing else), allow me to repost
>an essay on Searle and the Turing Test:

It's good to see someone on the anti-Searle side finally come out
against behaviorism and the Turing Test.

To back up Zeleny and McDermott, and oppose behaviorism and the Turing
Test, allow me to post a response.

I agree with the main points of McDermott's post, except for one.
On that point I agree with Zeleny (and Searle) instead of John (McCarthy).

McDermott is right to note that cognitive science has to answer the
question "what is intelligence" rather than, as Turing did, substitute
a different question.  (I'm not so sure he's the only one to have
noticed this, but that's a minor point.)

He's also right (IMHO) to aim at knocking down Searle's argument
rather than at showing Strong AI is correct (indeed, in my opinion,
knocking it down is all that's possible at this time), right to say
that something might pass the Turing Test without actually being
intelligent, and right to call behaviorism "shameful in anyone who
believes in cognitive science".

But unfortunately, he pretty much abandons this admirable caution
(aiming only to knock down Searle's argument) when arguing that
there would be two understanders in the Chinese Room.

>      Computer executing algorithm:   0+1=1 understander
>      Person executing algorithm:     1+1=2 understanders
>
>The fact that these two understanders occupy the same body, and the
>way the two relate, should make us smile, not choke.

It's not a fact yet; it's just something that might be possible
(so far as we know).

Moreover, if "in fact, we know almost nothing about what a
computational theory of mind would look like", then, I suggest,
we don't know that it will tell us there are two understanders.

I think McDermott is more or less right when he says

>if AI ever succeeds in producing algorithm $A$, presumably it will be due
>to the discovery of a nontrivial theory of understanding Chinese.  It
>is {\it this theory}, not Turing's Test, that will say whether an
>entity understands or does not understand Chinese.

But he goes on to say "one consequence of this theory is that any
entity that executes $A$ will understand Chinese".  There are a number
of problems here.  One is that we're suddenly in the business of
counting entities (that's how we get "two understanders").  This may be
a difficult problem in its own right (and Zeleny may have something
useful to say about it).

A similar problem occurs in:

>\footnote{$^2$}{By the
>way, his use of the term ``subsystem'' is an effort at disinformation.
>No such subsystem is proposed by his opponents; rather, the entire
>system embodies two understanders.}

How can we rule out the possibility that the computational theory of
mind (which we know almost nothing about) will call it a subsystem
after all?  Or, perhaps it will turn out that understanding can happen
only when systems are sufficiently independent.  Perhaps computers
can understand by running the right program, but Rooms, or
two-systems-in-a-Searle, cannot.

Another problem is that we're still in the "system reply" branch
of the debate.  Perhaps we have to involve the "robot reply" as
well.  Perhaps the Chinese Room system wouldn't understand, because
it lacks the right sort of connections to the world.  I'm not
convinced of this myself, but given that "we know almost nothing
about what a computational theory of mind would look like", I
don't see how we can rule it out.

Where does this leave the Turing Test?  McDermott writes:

>The argument has to cover some ground, and has to hit a conclusion
>that everyone agrees is absurd.  Since Searle's argument doesn't do
>this, he is forced to drag in Turing's Test, and pronounce {\it it}
>absurd.  But it is crucial to realize that {\it he} brought Turing's
>Test in; it didn't come with the position he is attacking.  In other
>words, it is completely false that ``one of the points at issue is the
>adequacy of the Turing test.''

But it is not at all "completely false".  The Turing Test is one of the
points at issue for many of the people involved.  One only has to look
at the arguments of the anti-Searle side of the repeated Usenet
discussions to see this.

Indeed, the reason we reached this point of the debate in the first
place may be that the people making the system reply didn't say:

  Maybe the system understands somehow, even though we have almost no
  idea how it would work and can't even be sure that it's possible.
  Searle hasn't shown that it's impossible.

Instead they (all too often) said something like "the system
understands".

-- jeff