From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!bronze!chalmers Mon Jan  6 10:30:18 EST 1992
Article 2468 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!bronze!chalmers
From: chalmers@bronze.ucs.indiana.edu (David Chalmers)
Subject: Re: Intelligence testing
Message-ID: <1992Jan1.230017.13907@bronze.ucs.indiana.edu>
Organization: Indiana University
References: <1992Jan1.115429.2331@arizona.edu>
Date: Wed, 1 Jan 92 23:00:17 GMT
Lines: 24

In article <1992Jan1.115429.2331@arizona.edu> bill@NSMA.AriZonA.EdU (Bill Skaggs) writes:

>The Turing test has often been criticized as too weak, but in
>my view it is actually much too stringent to be a good test
>for machine intelligence.  Suppose, instead of applying it
>to a computer, we apply it to an alien creature from the planet
>Zeta Galactase -- we call the creature intelligent if and only
>if it can imitate a human being on a teletype.  Obviously this
>is human chauvinism of the rawest kind.  If it is unfair to
>apply such a test to an alien creature, how can it be fair to
>apply it to a computer?

There's a terrific paper on this subject by Bob French in Mind last
year (1990, I mean).  He argues that any program would be
unmasked as non-human by a sufficiently rigorous Turing Test, but
for reasons that are unimportant to intelligence.  The reference is:

French, R.M. 1990.  Subcognition and the limits of the Turing test.  Mind
99:53-66.

-- 
Dave Chalmers                            (dave@cogsci.indiana.edu)      
Center for Research on Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable."