Newsgroups: comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!Germany.EU.net!EU.net!uknet!festival!edcogsci!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Subject: Re: Strong AI and consciousness
Message-ID: <CzsC2u.35w@cogsci.ed.ac.uk>
Sender: usenet@cogsci.ed.ac.uk (C News Software)
Nntp-Posting-Host: bute-alter.aiai.ed.ac.uk
Organization: AIAI, University of Edinburgh, Scotland
References: <vlsi_libCzqJzE.HpA@netcom.com> <3b0n0h$ite@news1.shell> <3b1b7t$3el@mp.cs.niu.edu>
Date: Thu, 24 Nov 1994 18:44:53 GMT
Lines: 54

In article <3b1b7t$3el@mp.cs.niu.edu> rickert@cs.niu.edu (Neil Rickert) writes:
>In <3b0n0h$ite@news1.shell> hfinney@shell.portal.com (Hal) writes:
>
>>The problem is this:
>
>>A) Whether a machine is running a certain program is a subjective
>>   judgement.  There is no right or wrong in the matter.  It depends
>>   on how you look at it, how you interpret what is happening.
>
>>B) A machine running the proper program becomes conscious.  (This is
>>   the strong AI principle.)
>
>>C) Whether something is conscious or not is not a subjective matter.
>>   We all know from personal experience that there is no room for
>>   doubt about our own consciousness.  This is a question where there
>>   is a right answer and a wrong answer.  Bill Clinton is conscious,
>>   and anyone who denies it is wrong.
>
>>Now, I believe, to a considerable degree, all three of these statements.
>>Yet they seem to contradict each other.  This poses a dilemma for me.
>>Do other people feel this way?
>
>Your criteria are well stated.  Evidently (C) contradicts (A)+(B).
>My claim is that C is incorrect.  I won't go into full details, since
>Hans did that rather well.

But he didn't show that C was incorrect.  He said there were
interpretations, but didn't show that the existence of interpretations
was all there was to it.  Indeed, how could he show that?

In practice, we will continue to distinguish between rocks and
humans and will not hold back from crushing rocks because it
might disrupt their self-interpretations.

>The problem, I think, is that you are testing (C) for reasonableness
>by assuming that a human is being tested.  Try thinking about it when
>a robot is being tested, or when an alien visitor from outer space is
>being tested.  Think of the alien as not looking at all human.  In
>fact, think of it as looking grotesque.

Why should looks matter?  Surely comp.ai.phil readers can overcome
that kind of prejudice these days.

>Our reaction to the testing of other humans is special.  We are
>members of the same species.  Not only that, for we are a highly
>social species, and tend to think of other humans as members of our
>extended families.  This means that we give them a great deal of
>benefit of the doubt.  We would not be so considerate of robots
>or aliens.

Or rocks?

-- jeff

