Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!bonnie.concordia.ca!ccu.umanitoba.ca!access.usask.ca!alberta!aunro!ukma!wupost!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: A Behaviorist Approach to AI Philosophy
Message-ID: <5841@skye.ed.ac.uk>
Date: 10 Dec 91 19:25:09 GMT
References: <YAMAUCHI.91Nov24030039@magenta.cs.rochester.edu> <5727@skye.ed.ac.uk> <YAMAUCHI.91Nov27203011@magenta.cs.rochester.edu> <5739@skye.ed.ac.uk> <1991Dec6.020944.4967@syacus.acus.oz.au> <5816@skye.ed.ac.uk> <1991Dec9.140719.28708@aifh.ed.ac.uk>
Reply-To: jeff@aiai.UUCP (Jeff Dalton)
Organization: AIAI, University of Edinburgh, Scotland
Lines: 75

In article <1991Dec9.140719.28708@aifh.ed.ac.uk> bhw@aifh.ed.ac.uk (Barbara H. Webb) writes:
>One thing I find odd in discussions of the Turing test is that people
>accuse it of being behaviourist. For example, Jeff Dalton, who I think
>was responsible for the subject line; or Drew McDermott, whose
>interesting article contained the line "anyone who thinks the Turing
>test is an interesting test for intelligence is guilty of behaviourism".

"Behaviorism" doesn't have to have a capital "B".  I use it as a
convenient way to refer to the view that something with the right
behavior automatically counts as having understanding.  In some cases,
I've taken the trouble to use something more elaborate than just
"behaviorism"; but then I decided no one was being confused.  Guess I
was wrong.

>Behaviourism does get rather misrepresented, but I take it that what
>these writers and others mean by "behaviourism" is the belief (credited to
>Skinner and his movement in psychology) that "all human behaviour can be
>understood without reference to any internal mental states or processes".

A behaviorist in the sense I explained above does not have to think
that all behavior can be explained without reference to any internal
mental states or processes.  Indeed, programs that pass the Turing
Test (for example) might have all kinds of internal states and
processes and behavior that couldn't be explained without reference
to them.

>Now, if you don't accept the Turing test (or some other criterion of
>`identical behaviour') as being sufficient to attribute mental processes
>to the entity that exhibits that behaviour, then you are suggesting that
>it is quite possible for something to behave exactly as a human does
>_without_ it having certain mental processes (consciousness, understanding,
>whatever).

When arguing against the Turing Test, there are two prongs of attack.
(1) Programs could never pass it.  (This one is hard to prove and may
well be false.)  (2) Even if they did, they might nonetheless fail
to understand / be conscious / whatever.  (This is what Searle argues,
but it's also hard to prove.)

Some people reply to (2) by saying "oh no, if it had the behavior,
that (alone) shows it understands."  In arguing against that, I am
suggesting that it's possible to have the behavior without the
understanding, so far as we now know; though I'm not sure I'd agree
with your notion of mental processes.

>In that case, why are we postulating these mental processes
>when we try to explain the behaviour of humans?

This is very strange.  You make it sound as if we're non-humans
(perhaps from Mars) and so have to say things like "What could
possibly make humans behave this way?  Maybe they're conscious,
like us.  (Or not, depending on your view of Martians.)"

If you want to be a Behaviorist with a big "B" and suppose that we
don't actually have understanding or consciousness, then go ahead;
but I hope no one will take you seriously.  For many (most?) of
us, understanding and consciousness are part of what's to be explained.

>Why don't we just look for the explanation that doesn't require
>all these problematic mental processes? Why, in short, don't we
>subscribe to Behaviourism? 

You seem to be supposing (as above) that having programs with
the right behavior would provide an explanation without reference
to "problematic" mental processes.  That might not be the case
at all.  Indeed, for many in Cog Sci, the hope is that programs
would provide an explanation for understanding and consciousness.

>There are problems with the Turing test as a criterion for intelligence,
>but just because you disagree with it, and also disagree with
>behaviourism, doesn't mean that it is therefore behaviourist. 

No one has ever argued that way, so far as I can tell.

-- jd


