From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!mips!news.cs.indiana.edu!arizona.edu!NSMA.AriZonA.EdU!bill Mon Jan  6 10:30:16 EST 1992
Article 2465 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!mips!news.cs.indiana.edu!arizona.edu!NSMA.AriZonA.EdU!bill
From: bill@NSMA.AriZonA.EdU (Bill Skaggs)
Newsgroups: comp.ai.philosophy
Subject: Intelligence testing
Message-ID: <1992Jan1.115429.2331@arizona.edu>
Date: 1 Jan 92 18:54:28 GMT
Reply-To: bill@NSMA.AriZonA.EdU (Bill Skaggs)
Distribution: world,local
Organization: Center for Neural Systems, Memory, and Aging
Lines: 21
Nntp-Posting-Host: ca3.nsma.arizona.edu

With the new year, let's see if we can get some nice new
threads going.  Here's a proposal:

The Turing test has often been criticized as too weak, but in
my view it is actually much too stringent to be a good test
for machine intelligence.  Suppose, instead of applying it
to a computer, we apply it to an alien creature from the planet
Zeta Galactase -- we call the creature intelligent if and only
if it can imitate a human being on a teletype.  Obviously this
is human chauvinism of the rawest kind.  If it is unfair to
apply such a test to an alien creature, how can it be fair to
apply it to a computer?

Let us, then, avoid the negative emotions aroused by the
question "Can machines think?" and consider how we would
go about answering the question "Can the creature from Zeta
Galactase think?".

Any takers?

	-- Bill
