From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!zaphod.mps.ohio-state.edu!qt.cs.utexas.edu!news!noc.near.net!garbo.ucc.umass.edu!dime!orourke Tue Jan 28 12:18:13 EST 1992
Article 3180 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!zaphod.mps.ohio-state.edu!qt.cs.utexas.edu!news!noc.near.net!garbo.ucc.umass.edu!dime!orourke
From: orourke@unix1.cs.umass.edu (Joseph O'Rourke)
Newsgroups: comp.ai.philosophy
Subject: Re: Intelligence Testing
Message-ID: <42304@dime.cs.umass.edu>
Date: 27 Jan 92 17:34:12 GMT
References: <11927@optima.cs.arizona.edu>
Sender: news@dime.cs.umass.edu
Reply-To: orourke@sophia.smith.edu (Joseph O'Rourke)
Organization: Smith College, Northampton, MA, US
Lines: 27

In article <11927@optima.cs.arizona.edu> gudeman@cs.arizona.edu (David Gudeman)
writes (in response to Neil Rickert):
>...No, I have a third choice.  I can simply observe that there is no
>known method of determining consciousness _in others_ which is free of
>_believing that they are like you_, and reserve my opinion on whether
>such a thing might ever be possible.  This, coincidentally, is the
>choice I take.

But it seems to me the issue is: how much must they be like you
for you to ascribe consciousness to them?  It seems the following
set of beliefs is coherent (I am not claiming I believe them all):

1. Understanding (the grasping of meanings) is impossible without
   consciousness.

2. It is possible that consciousness does not require biological tissue.

3. As a result of a deep Turing-Test-like conversation with a machine,
   you have to admit that it seems the machine grasps meanings.

4. Since you believe (1), you are led to wonder if perhaps the machine
   is conscious.

5. This does not contradict (2).

6. So you wonder if the machine is perhaps enough like you that it could
   be conscious.
