From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!qt.cs.utexas.edu!yale.edu!cs.yale.edu!mcdermott-drew Mon Dec 16 11:00:57 EST 1991
Article 2022 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!qt.cs.utexas.edu!yale.edu!cs.yale.edu!mcdermott-drew
From: mcdermott-drew@CS.YALE.EDU (Drew McDermott)
Subject: Re: A Behaviorist Approach to AI Philosophy
Message-ID: <1991Dec10.213012.16420@cs.yale.edu>
Sender: news@cs.yale.edu (Usenet News)
Nntp-Posting-Host: atlantis.ai.cs.yale.edu
Organization: Yale University Computer Science Dept., New Haven, CT 06520-2158
References: <1991Dec6.020944.4967@syacus.acus.oz.au> <5816@skye.ed.ac.uk> <1991Dec9.140719.28708@aifh.ed.ac.uk>
Date: Tue, 10 Dec 1991 21:30:12 GMT
Lines: 53

  In article <1991Dec9.140719.28708@aifh.ed.ac.uk> bhw@aifh.ed.ac.uk (Barbara H. Webb) writes:
  >One thing I find odd in discussions of the Turing test is that people
  >accuse it of being behaviourist. For example, Jeff Dalton, who I think
  >was responsible for the subject line; or Drew McDermott, whose
  >interesting article contained the line "anyone who thinks the Turing
  >test is an interesting test for intelligence is guilty of behaviourism".

I should have said "anyone who thinks that passing the Turing test is
a criterion for intelligence is guilty of behaviorism."

I take Searle to be imputing this belief to his opponents when he says
(quoting from the Scientific American article again): "The Turing test
... is simply this: if a computer can perform in such a way that an
expert cannot distinguish its performance from that of a human who has
a certain cognitive ability .... then the computer also has that
ability."  

I don't really want to claim that the Turing test isn't an
"interesting test," and I apologize for the woolliness.

  >Now, if you don't accept the Turing test (or some other criteria of
  >`identical behaviour') as being sufficient to attribute mental processes
  >to the entity that exhibits that behaviour, then you are suggesting that
  >it is quite possible for something to behave exactly as a human does
  >_without_ it having certain mental processes (consciousness, understanding,
  >whatever). In that case, why are we postulating these mental processes
  >when we try to explain the behaviour of humans? Why don't we just look
  >for the explanation that doesn't require all these problematic mental
  >processes? Why, in short, don't we subscribe to Behaviourism? 
  >
  >On the other hand, to claim, in the Turing test, that nothing could behave
  >exactly like a human unless it had something like the mental processes
  >or mind of a human, is to claim that the existence of these processes is
  >necessary for human behaviour, that understanding these processes is
  >necessary for understanding human behaviour. This sounds like an
  >extremely anti-Behaviourism stance to me (in fact, rather more
  >anti-behaviourist than I am willing to go along with, but that's another
  >topic).

You're right.  However, much of the controversy regarding the Turing
test revolves around whether there is anything to (say) consciousness
beyond the ability to display a certain behavior.  Some people appear
to embrace the idea that the ability to display the behavior =
consciousness.  Such people then insist that Turing-style tests are
the only criteria for consciousness we could ever have.

As you correctly point out, one could perfectly well believe that
consciousness was a quality that transcended behavior and also believe
that a certain behavior was infallibly correlated with it.  Without a
better theory of the role consciousness plays in the operation of
minds, I will suspend judgement.

                                             -- Drew McDermott
