From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!pindor Thu Feb 20 15:20:54 EST 1992
Article 3746 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!pindor
From: pindor@gpu.utcs.utoronto.ca (Andrzej Pindor)
Subject: Re: Intelligence Testing
Message-ID: <1992Feb14.214304.21507@gpu.utcs.utoronto.ca>
Organization: UTCS Public Access
References: <1992Jan31.142711.17883@oracorp.com> <6184@skye.ed.ac.uk>
Date: Fri, 14 Feb 1992 21:43:04 GMT

In article <6184@skye.ed.ac.uk> jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
>In article <1992Jan31.142711.17883@oracorp.com> daryl@oracorp.com writes:
>>
>>Jeff, the discussion is always about the *sufficiency* of the Turing
>>Test, not its necessity. 
>
>You seemed to be saying you relied entirely on behavior.  I don't
>think that's so.  I think that you, like me, conclude people are
>conscious before they pass any Turing Test.  What basis do you
>use then?  It's certainly not the Turing Test.
>
What basis do you use? It seems to me that you, I, and everyone else
conclude that other people are conscious on the basis of their _behaviour_,
and the TT is a subset of this observational test. The fact that other people
look like us is also relevant, but only slightly - if someone behaved like a
zombie you would most likely conclude that he is not conscious even though
he might look human.

>-- jd


-- 
Andrzej Pindor
University of Toronto
Computing Services
pindor@gpu.utcs.utoronto.ca