From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!bonnie.concordia.ca!uunet!mcsun!uknet!edcastle!aisb!aisb!smaill Mon Jan  6 10:30:33 EST 1992
Article 2496 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!bonnie.concordia.ca!uunet!mcsun!uknet!edcastle!aisb!aisb!smaill
From: smaill@aisb.ed.ac.uk (Alan Smaill)
Newsgroups: comp.ai.philosophy
Subject: Re: Intelligence testing
Message-ID: <SMAILL.92Jan4204739@sin.aisb.ed.ac.uk>
Date: 4 Jan 92 20:47:39 GMT
References: <1992Jan1.115429.2331@arizona.edu> <BSIMON.92Jan2070527@elvis.stsci.edu>
	<1992Jan3.122235.26340@aifh.ed.ac.uk>
Sender: news@aisb.ed.ac.uk (Network News Administrator)
Distribution: world,local
Organization: DAI, University of Edinburgh
Lines: 25
In-Reply-To: bhw@aifh.ed.ac.uk's message of 3 Jan 92 12:22:35 GMT


In article <1992Jan3.122235.26340@aifh.ed.ac.uk> bhw@aifh.ed.ac.uk (Barbara H. Webb) writes:

   The Turing test claims that if a machine could behave convincingly like
   a human (in the use of language) then it must (or at least, is very
   likely to) do so because it has similar internal mental processes
   ('thinking' or 'consciousness' or 'understanding') to those of a human.
   I.e. the behaviour is clear evidence of the internal processes. 

In Turing's original paper, he proposes replacing the question
"Can machines think?" with "Can a machine pass the Turing test?".
It's pretty clear that he doesn't regard these as equivalent.

Do you think that Turing himself makes anything like
the claim you mention in his article?  That is not how I remember
the paper.

--
Alan Smaill,                       JANET: A.Smaill@uk.ac.ed             
Department of Artificial           ARPA:  A.Smaill%uk.ac.ed@nsfnet-relay.ac.uk
       Intelligence,               UUCP:  ...!uknet!ed.ac.uk!A.Smaill
Edinburgh University. 