Newsgroups: comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!ix.netcom.com!netcom.com!jqb
From: jqb@netcom.com (Jim Balter)
Subject: Re: Strong AI and consciousness
Message-ID: <jqbD0Ewxx.D0M@netcom.com>
Organization: NETCOM On-line Communication Services (408 261-4700 guest)
References: <3b1b7t$3el@mp.cs.niu.edu> <CzsC2u.35w@cogsci.ed.ac.uk> <jqbD0D5ou.pH@netcom.com> <D0EM5K.3n8@cogsci.ed.ac.uk>
Date: Tue, 6 Dec 1994 23:22:45 GMT
Lines: 95

In article <D0EM5K.3n8@cogsci.ed.ac.uk>,
Jeff Dalton <jeff@aiai.ed.ac.uk> wrote:
>In article <jqbD0D5ou.pH@netcom.com> jqb@netcom.com (Jim Balter) writes:
>>In article <CzsC2u.35w@cogsci.ed.ac.uk>,
>>Jeff Dalton <jeff@aiai.ed.ac.uk> wrote:
>>>In practice, we will continue to distinguish between rocks and
>>>humans and will not hold back from crushing rocks because it
>>>might disrupt their self-interpretations.
>>
>>So what? 
>
>So what's the point of Moravec's mappings?  Will we ever accept the
>view that rocks might be conscious in practice?

I have no knowledge of or interest in Hans Moravec's mappings, so why
ask me?  Some people accept that view, but I don't and probably never
will.  But what I asked, and as far as I can see you didn't answer, is
why your statement above is cogent; why it fits the argument (which
you omitted, I don't remember, and I no longer care at this point).
Why did you say the particular thing you did?  What is its relevance?
You are so damn arrogant that you think if you say "bull", everyone
understands what you mean.  If you say "our sense of consciousness",
everyone understands what you mean.  Every time you make a statement
or use a term, it is so clear what you mean by it that you find it
offensive to be asked for clarification, and you answer questions with
questions.

>> What do human sentiments say about matters of fact?
>
>Why does that matter here?  You will note that I don't say what we'll
>continue to do will be right.

Again you answer a question with a question.  It matters here because you
expressed a human sentiment in a debate about matters of fact.  What is the
weight of this sentiment?  Why do we care?  Aliens may crush us without
concern for our self-interpretations, but you have said that doesn't affect
the matter of fact of whether we have them.  Why are the actions and
interpretations of aliens irrelevant, but not those of humans?  Why is what
we do in practice relevant to what it means to have self-interpretations?
And if it isn't relevant, why did you bring it up?

It seems to me that you make many statements like "bull" or "I agree"
or "there are other possibilities" or "I used to defend the TT but now
I don't" or "humans crush rocks".  These aren't arguments.  Without
explication, they aren't interesting statements, except perhaps as any
opinion is interesting to the person holding it.  That's the height of
arrogance.  The point of discussions in c.a.p is to clarify, with
understanding as the goal.  This is why I challenge you so often: you
make so many statements that seem to me muddy or confused, and leave
discussions in disarray.  You respond by saying "you're being hostile",
which you are entitled to think, but you use it as an excuse to dodge
critical questions.  Which is one of the things I'm hostile to.

>>>Why should looks matter.  Surely comp.ai.phil readers can overcome
>>>that kind of prejudice these days.
>>
>>If we have a robot that behaves like a human, why do we ask of it a question
>>("is it really conscious?") that we ask of no human, other than based upon
>>how it looks?
>
>I've already said it's how it works, not how it looks.

So we ask the question of it because of how it works?  But we don't know
how it works, so that can't be the reason!  Once again, you don't answer
the question "why do we ask"; you just knee-jerk respond to the phrase
"how it looks".  Fine, since we both agree that "how it looks" is not a
good answer, I'll leave that off and ask the question again:

If we have a robot that behaves like a human, why do we ask of it a question
("is it really conscious?") that we ask of no human?

>In any case, we do ask the question of humans.  That's why there's
>an "other minds problem".

The "other minds problem" is "How can we know the other is
conscious?", not "Is the other conscious?".  Who's the last person of
whom you asked, "Is s/he conscious?"?  We *assume* that humans are
conscious; the "other minds problem" asks on what basis we make that
assumption.  If the concern about robot consciousness were simply the
other minds problem in general, then it wouldn't be relevant to AI at
all.  The other minds problem comes about, IMO, because we are
confused about the nature of consciousness.  Trying to answer the
question of why we assume things about human beings that we don't
assume of things that act like human beings may help shed some light.

Imagine you walk into a room where a lively conversation about mind,
consciousness, qualia, the usual c.a.p crap, is taking place.  At some
point, one of the participants turns around: it's Data; he's got pasty
skin and weird eyes.  He pops open a compartment in his head, and there
are flashing lights.  Now, before noticing this, was there any question
of whether any of the participants in the conversation were conscious?
What about afterwards?  If so, why?  Note that we have already ruled out
"Data appears to be an artificial life form" as a valid answer.
-- 
<J Q B>
