From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!wupost!uunet!mcsun!uknet!edcastle!aisb!jeff Tue Jan 28 12:16:08 EST 1992
Article 3030 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!wupost!uunet!mcsun!uknet!edcastle!aisb!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Intelligence testing
Message-ID: <1992Jan22.212913.6581@aisb.ed.ac.uk>
Date: 22 Jan 92 21:29:13 GMT
References: <1992Jan22.104726.18897@aifh.ed.ac.uk>
Sender: news@aisb.ed.ac.uk (Network News Administrator)
Organization: AIAI, University of Edinburgh, Scotland
Lines: 178

In article <1992Jan22.104726.18897@aifh.ed.ac.uk> bhw@aifh.ed.ac.uk (Barbara H. Webb) writes:
>In article <6024@skye.ed.ac.uk> jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
>
>I'm afraid I don't have time to reply piece by piece to your article,
>and besides I think any ideas that this thread might contain are getting
>buried under excess verbiage. 

I think you're right.  One of us was going to have to take this
step, and it might as well be you.  Still, I think I'd better reply
to this article of yours piece by piece, to make it easier to indicate
just where we disagree.  I think there's actually more agreement
than one might have expected.

> * Several people have said that the Turing test is bad because it is
>behaviourist (and everyone knows Behaviourism is Bad).

Fair enough.  All that I'd add is that the word "behaviorism" is
sometimes used to mean something other than behaviorist psychology,
properly so-called.  I don't think these other uses of "behaviorism"
are completely unjustified, but there's no doubt that they can be
misunderstood.

> * Behaviourism is generally considered to be bad (and rejected in favour
>of cognitive psychology) because it denies that mentality and/or
>cognitive processes have any explanatory role for human behaviour.

I think that's more or less right (and you must know more about
the history of it than I do), but I'm not sure that other ways of
explaining behavior without reference to cognitive processes would
necessarily suffer the same fate.  An explanation based on physics and
chemistry, or one in computational terms, might seem more reasonable
than one that emphasized conditioning.

> * Accepting the Turing test does not require denying that mentality has
>an explanatory role for human behaviour: in fact the idea that "the
>behaviour is strong evidence for the mentality" seems to follow quite
>obviously from the idea that "mentality is involved in any plausible
>explanation of the behaviour". 

I agree.

>                                Of course, this reasoning doesn't make
>the Turing test _sufficient_ because in principle there could be an
>alternative way the behaviour could come about.

Still agree.

>                                                  But such alternatives
>may be considered so unlikely that the Turing test may be taken to be
>sufficient _in practice_.

It's certainly true that it might be taken as sufficient in practice,
especially if no strong reasons to think otherwise turn up.  

But I don't think it's guaranteed that that's how we'll feel once we
actually have to deal with computers that pass the Turing Test.  It
might be, for example, that we have several different kinds of
programs that enable the computer to pass the Turing Test and that
their differences are such that we decide only some of them should
count as really understanding.  

Or perhaps the Turing Test doesn't reveal a significant difference but
some other test (perhaps even a behavioral test) does.

Or perhaps it just starts to seem less plausible that the TT is
sufficient, because of some other properties of the machines.  (Maybe,
for instance, they can be manufactured to have certain political
views, and we start to think they don't have free will but still think
humans do.)

In short, I think it's possible (not certain, of course, just possible)
that we will not think the TT sufficient in practice once we know more
about machines that pass the Turing Test, and in particular once we
know how to build them and have some to examine.

>  I admit (I think I did already) that my initial statement that
>accepting the Turing test was incompatible with Behaviourism was too
>strong. A Behaviourist might accept the test because they consider the
>behaviour to be all there is. 

I agree.

>                              However, I don't think that the pragmatic
>approach of "If my computer passes the Turing Test, I don't care if it
>really thinks or not" is equivalent to adopting this behaviourist
>outlook, because it says nothing at all about what sort of things may be
>involved in explaining the behaviour (sufficiently so to imitate it).

I agree with that too.

>I think this is one of the main places where Jeff would disagree, i.e.
>he would say that the pragmatic approach is a behaviourist one.

Since I've just been agreeing with you, I'm not sure that's right;
but perhaps I'd better make some more precise distinctions.

I agree that "don't care" is not Behaviorist.  However, pragmatism
isn't the only way to arrive at "I don't care".  Someone might suspect
that computers don't think, and decide that the TT doesn't show otherwise
(even in a pragmatic sense), but just not think it matters.

Moreover, if by "the pragmatic approach" you mean taking the TT as
sufficient in practice, I don't think that is necessarily behaviorist
even in a small-b sense.

I'm also willing to agree that only big-B behaviorism, ie behaviorist
psychology, is the real, properly so-called behaviorism.  (We might
also want "behaviorism" to include "logical behaviorism", but I don't
think the question of whether it's behaviorism or not has been part
of our disagreement.)

The kinds of arguments I've sometimes called (small-b) behaviorist,
and that I think have some motivation and arguments in common with
(big-B) behaviorism, are ones that say we have to accept the TT
because there's no way to test for "real understanding" or because
the idea of "real understanding" is unscientific or ill-defined.

There are also arguments, less close to behaviorism, that threaten
us with skepticism about other minds if we do not accept the TT.
The idea seems to be that we use the TT to conclude that other
people have minds (or understanding or whatever) and so we should
do the same for machines.  Since the sort of behavior that can
be expressed via a teletype is not the only reason we can have for
concluding that other humans have minds, this again seems an
excessive emphasis on behavior.

Moreover, I don't think either of those kinds of arguments is
pragmatic.

As far as taking the TT as sufficient in practice is concerned,
I tried to give some idea of what I thought of that above.  I think
we have to leave open the possibility that, even if we now think the
TT might be sufficient in practice, we might change our minds once
we know more.

> * Rejecting the Turing test is to say (at the very least) "the
>behaviour is not sufficient evidence for the mentality". It seems to
>directly follow from this that "it is conceivable that some alternative
>means of obtaining the behaviour exists". I thought Jeff was disputing
>this step, but I now suspect what he was objecting to was the stronger
>statement that "rejecting the Turing test requires a coherent concept of
>an alternative means of obtaining the behaviour".

I probably would dispute the stronger statement (though it might
depend on just what was implied by "coherent"); but I don't dispute
the weaker one (ie, that it's conceivable some alternative exists).

Our main disagreement, in my opinion, was that I was saying someone
(eg, Searle) could believe certain things even though they made
arguments that seemed to suppose the opposite -- and you seemed to
be denying this.  Note that I say "believe" and "made arguments".
So it isn't, at least not directly, a matter of conflicting beliefs.
But even if it were a matter of conflicting beliefs, so what?  People
can have inconsistent beliefs, can't they?  So this whole dispute
seemed rather weird.

>Now, I realise that this is not required if all you want to do is to
>point out that the Turing Test is _in principle_ insufficient. However,
>arguing that the Turing test is insufficient in practice does raise this
>problem. But if someone can propose a coherent alternative means (such
>as Searle's 'meaningless symbol manipulation') for obtaining the
>behaviour, then this constitutes an alternative explanation for the
>behaviour in humans as well, which creates the new problem of explaining
>why the alternative is plausible for computers but not for humans. I
>don't think Searle has adequately explained this.

I don't think there's space in this message for a discussion of
whether Searle needs to explain this.

However, I should point out that I don't think it's the case that
all coherent alternatives have to be alternatives that could apply
to humans.  So I don't think that a coherent alternative would
necessarily constitute an explanation for human behavior.  It may
still, of course, be necessary to show why the alternative isn't an
alternative for humans; on the other hand, it may be trivial to do 
so.

-- jd
