Newsgroups: comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!news.mathworks.com!europa.eng.gtefsd.com!howland.reston.ans.net!ix.netcom.com!netcom.com!jqb
From: jqb@netcom.com (Jim Balter)
Subject: Re: Strong AI and consciousness
Message-ID: <jqbD0D5ou.pH@netcom.com>
Organization: NETCOM On-line Communication Services (408 261-4700 guest)
References: <vlsi_libCzqJzE.HpA@netcom.com> <3b0n0h$ite@news1.shell> <3b1b7t$3el@mp.cs.niu.edu> <CzsC2u.35w@cogsci.ed.ac.uk>
Date: Tue, 6 Dec 1994 00:36:29 GMT
Lines: 16

In article <CzsC2u.35w@cogsci.ed.ac.uk>,
Jeff Dalton <jeff@aiai.ed.ac.uk> wrote:
>In practice, we will continue to distinguish between rocks and
>humans and will not hold back from crushing rocks because it
>might disrupt their self-interpretations.

So what?  What do human sentiments say about matters of fact?

>Why should looks matter.  Surely comp.ai.phil readers can overcome
>that kind of prejudice these days.

If we have a robot that behaves like a human, why do we ask of it a question
("is it really conscious?") that we ask of no human, if not on the basis of
how it looks?
-- 
<J Q B>
