Newsgroups: comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!swrinde!pipex!uknet!festival!edcogsci!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Subject: Re: Strong AI and consciousness
Message-ID: <D0EM5K.3n8@cogsci.ed.ac.uk>
Sender: usenet@cogsci.ed.ac.uk (C News Software)
Nntp-Posting-Host: bute.aiai.ed.ac.uk
Organization: AIAI, University of Edinburgh, Scotland
References: <3b1b7t$3el@mp.cs.niu.edu> <CzsC2u.35w@cogsci.ed.ac.uk> <jqbD0D5ou.pH@netcom.com>
Date: Tue, 6 Dec 1994 19:29:43 GMT
Lines: 32

In article <jqbD0D5ou.pH@netcom.com> jqb@netcom.com (Jim Balter) writes:
>In article <CzsC2u.35w@cogsci.ed.ac.uk>,
>Jeff Dalton <jeff@aiai.ed.ac.uk> wrote:
>>In practice, we will continue to distinguish between rocks and
>>humans and will not hold back from crushing rocks because it
>>might disrupt their self-interpretations.
>
>So what? 

So what's the point of Moravec's mappings?  Will we ever accept the
view that rocks might be conscious in practice?

> What do human sentiments say about matters of fact?

Why does that matter here?  You will note that I don't say that what
we'll continue to do will be right.

>>Why should looks matter?  Surely comp.ai.phil readers can overcome
>>that kind of prejudice these days.
>
>If we have a robot that behaves like a human, why do we ask of it a question
>("is it really conscious?") that we ask of no human, other than based upon
>how it looks?

I've already said it's how it works, not how it looks.

In any case, we do ask the question of humans.  That's why there's
an "other minds problem".

-- jeff
