Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!wupost!uunet!mcsun!uknet!edcastle!aifh!bhw
From: bhw@aifh.ed.ac.uk (Barbara H. Webb)
Newsgroups: comp.ai.philosophy
Subject: Re: Intelligence testing
Message-ID: <1992Jan3.122235.26340@aifh.ed.ac.uk>
Date: 3 Jan 92 12:22:35 GMT
References: <1992Jan1.115429.2331@arizona.edu> <BSIMON.92Jan2070527@elvis.stsci.edu>
Reply-To: bhw@aifh.ed.ac.uk (Barbara H. Webb)
Distribution: world,local
Organization: Dept AI, Edinburgh University, Scotland
Lines: 58

In article <BSIMON.92Jan2070527@elvis.stsci.edu> bsimon@elvis.stsci.edu 
(Bernie Simon) writes:
>The Turing test is a behavioural test for intelligence and it is
>unconvincing as a test for the same reason that Behaviourism is
>unconvincing as an explanation of intelligence.

Sorry to repeat my comments of a few weeks ago, but obviously the
message isn't getting through: the Turing test may test 
behaviour, but it is _not_ a Behaviourist test!

Behaviourism as an approach to studying behaviour claimed that all
behaviour, including human intelligence, could be explained without
reference to any internal 'mental' or 'cognitive' processes (rather, it
could be explained by what Skinner called 'contingencies of
reinforcement'). It was by and large rejected in favour of theories of
behaviour that postulate (and seek to determine the nature of) 
mental processes. Computers contributed by providing a model of the
sorts of 'processes' that might be involved. 

The Turing test claims that if a machine can behave convincingly like
a human (in the use of language) then it must (or at least is very
likely to) do so because it has similar internal mental processes
('thinking' or 'consciousness' or 'understanding') to those of a human.
I.e. the behaviour is clear evidence of the internal processes. 

A Behaviourist would most likely respond that the behaviour is perfectly
possible without such processes --- because they believe it occurs in
humans without such processes. However, they would probably object that the
causes of the behaviour in the computer are of completely the wrong kind
for it to be interesting for understanding human behaviour (unless
perhaps it had been implemented using a huge reinforcement neural net).

Searle and others also seem to think that the behaviour is perfectly
possible without such processes (without 'real' intentionality,
consciousness, thinking). If it _is_ possible, then what is the
justification for supposing that such processes are necessary to explain
human behaviour? Why not go along with the behaviourists? 
The only reply is to appeal to the intuitive obviousness that these 
processes occur in ourselves. They can't claim that these processes are
in any way necessary for our intelligent behaviour, because this would
mean that anything exhibiting the behaviour must also have them.

In summary, the Turing test (which ascribes 'thinking' to anything that
exhibits the intelligent behaviour of a human) is based on the
assumption that such internal processes are _necessary_ for the
behaviour to occur. This completely contradicts the Behaviourist
doctrine, as above, that the behaviour is in _no way_ explained by
referring to such internal processes.

One of the replies to my previous posting said that, well, they were
using small-b behaviourist to mean 'someone that believed that observing
the behaviour was sufficient to ascribe mental states (consciousness)'.
This is perhaps a common usage, but has nothing to do with Behaviourism,
or with the reasons Behaviourism was rejected. It also seems pointlessly
tautological to append the description 'behaviourist', in this sense,
to the Turing test.

BW


