Newsgroups: comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!ix.netcom.com!netcom.com!vlsi_lib
From: vlsi_lib@netcom.com (Gerard Malecki)
Subject: Re: Strong AI and consciousness
Message-ID: <vlsi_libCzzzEo.4Lx@netcom.com>
Organization: VLSI Libraries Incorporated
References: <3b11sh$hod@cantaloupe.srv.cs.cmu.edu> <vlsi_libCzztB1.BGM@netcom.com> <3bdg5h$mjq@mp.cs.niu.edu>
Date: Mon, 28 Nov 1994 21:51:58 GMT
Lines: 31

In article <3bdg5h$mjq@mp.cs.niu.edu> rickert@cs.niu.edu (Neil Rickert) writes:
>In <vlsi_libCzztB1.BGM@netcom.com> vlsi_lib@netcom.com (Gerard Malecki) writes:
>
>>         The truth of whether one is conscious or not shouldn't be based
>>on other's subjective judgement. If you think you are conscious, you are.
>>That is the beauty of consciousness. It is self referential.
>
>There is a difficulty with this as a test of consciousness.  Suppose I
>construct a simple robot, and put inside it a simple tape player, causing
>it to repeatedly emit the words "I am conscious."   I doubt that
>you will consider it conscious.
>
I defined consciousness to be self-referential. Your counter-argument
is not based on that definition. The question of whether a robot
is conscious is best left to the robot itself, not to someone
judging it by the sounds emanating from a tape recorder. If the robot
were unconscious, it would not be thinking that it is conscious (despite
the tape-recorded message); in fact, it would not be thinking anything at all.

>If I am right, then your simple test fails.  In practice, you assume
>that other people are conscious, and you assume it for reasons other
>than that they think they are conscious.  Having assumed that they
>are conscious, you interpret the words they emit as being reports of
>thinking. 

This is exactly the pitfall I pointed out: you are judging consciousness
by someone else's subjective assessment rather than by self-reference.


Shankar Ramakrishnan
shankar@vlibs.com

