Article 3091 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!qt.cs.utexas.edu!yale.edu!jvnc.net!darwin.sura.net!gatech!mcnc!ecsgate!lrc.edu!lehman_ds
From: lehman_ds@lrc.edu
Newsgroups: comp.ai.philosophy
Subject: Re: Intelligence testing
Message-ID: <1992Jan23.111212.148@lrc.edu>
Date: 23 Jan 92 16:12:12 GMT
References: <1992Jan14.015806.23985@oracorp.com> <5982@skye.ed.ac.uk>  <1992Jan22.104726.18897@aifh.ed.ac.uk>
Organization: Lenoir-Rhyne College, Hickory, NC
Lines: 71

In article <1992Jan22.104726.18897@aifh.ed.ac.uk>, bhw@aifh.ed.ac.uk (Barbara H. Webb) writes:
> In article <6024@skye.ed.ac.uk> jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
> 
> I'm afraid I don't have time to reply piece by piece to your article,
> and besides I think any ideas that this thread might contain are getting
> buried under excess verbiage. Instead I will try to clearly and
> concisely set out my points, and hopefully pinpoint more accurately the
> points at which you disagree. If you would care to do the same, perhaps
> we can then leave it up to the readers of this thread to make up their
> own minds?
> 
>  * Several people have said that the Turing test is bad because it is
> behaviourist (and everyone knows Behaviourism is Bad).
> 
>  * Behaviourism is generally considered to be bad (and rejected in favour
> of cognitive psychology) because it denies that mentality and/or
> cognitive processes have any explanatory role for human behaviour.
>  
>  * Accepting the Turing test does not require denying that mentality has
> an explanatory role for human behaviour: in fact the idea that "the
> behaviour is strong evidence for the mentality" seems to follow quite
> obviously from the idea that "mentality is involved in any plausible
> explanation of the behaviour". Of course, this reasoning doesn't make
> the Turing test _sufficient_ because in principle there could be an
> alternative way the behaviour could come about. But such alternatives
> may be considered so unlikely that the Turing test may be taken to be
> sufficient _in practice_.
> 
>   I admit (I think I did already) that my initial statement that
> accepting the Turing test was incompatible with Behaviourism was too
> strong. A Behaviourist might accept the test because they consider the
> behaviour to be all there is. However, I don't think that the pragmatic
> approach of "If my computer passes the Turing Test, I don't care if it
> really thinks or not" is equivalent to adopting this behaviourist
> outlook, because it says nothing at all about what sort of things may be
> involved in explaining the behaviour (sufficiently so to imitate it). I
> think this is one of the main places where Jeff would disagree, i.e. he
> would say that the pragmatic approach is a behaviourist one.
> 
>  * Rejecting the Turing test is to say (at the very least) "the
> behaviour is not sufficient evidence for the mentality". It seems to
> directly follow from this that "it is conceivable that some alternative
> means of obtaining the behaviour exists". I thought Jeff was disputing
> this step, but I now suspect what he was objecting to was the stronger
> statement that "rejecting the Turing test requires a coherent concept of
> an alternative means of obtaining the behaviour".
> 
> Now, I realise that this is not required if all you want to do is to
> point out that the Turing Test is _in principle_ insufficient. However,
> arguing that the Turing test is insufficient in practice does raise this
> problem. But if someone can propose a coherent alternative means (such
> as Searle's 'meaningless symbol manipulation') for obtaining the
> behaviour, then this constitutes an alternative explanation for the
> behaviour in humans as well, which creates the new problem of explaining
> why the alternative is plausible for computers but not for humans. I
> don't think Searle has adequately explained this.
> 
> BW
  I think this pretty much states the arguments as they currently stand.
But I also think there is a debate to be had about whether the system as a
whole understands in Searle's Chinese Room.  I have this to add:
    When we looked at the room throughout this thread, we took into
consideration the room, the symbols, and the human.  What we did NOT
take into consideration is the part of the system which is not ACTIVELY at
work here: the intelligence that set up the rules for the Chinese Room.  This
in itself should be considered.  The room itself may not "understand", but
the symbol manipulations came from an "intelligent" agent.
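    Just to make the point concrete, here is a very rough sketch, in Python,
with made-up placeholder symbols standing in for Chinese (nothing here is
Searle's own formulation): the person in the room does nothing but table
lookups, and every bit of apparent competence was put into the table by
whoever wrote it.

  # A toy, hypothetical rule book for the Chinese Room.  The "Chinese"
  # symbols are invented placeholders, not real Chinese characters.
  RULE_BOOK = {
      "squiggle-squoggle": "squoggle-squiggle",  # each entry was authored by
      "blotch-splotch":    "splotch-blotch",     # someone who DID understand
      # ... many more rules, all written by the rule book's author
  }

  def room(symbols):
      """The person in the room: pure lookup, no understanding required."""
      return RULE_BOOK.get(symbols, "squiggle?")  # fallback for unknown input

  print(room("squiggle-squoggle"))  # the reply looks competent, but the
                                    # competence belongs to the rule author

Whatever "understanding" the room displays is really sitting in the table,
that is, in the head of its author, which is exactly the part of the system
this thread has been leaving out.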
   Besides, if I have trouble getting my point across in Chinese, am I
therefore not intelligent?
    Drew Lehman
    Lehman_ds@lrc.edu


