Newsgroups: comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!pipex!uknet!festival!edcogsci!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Subject: Re: Strong AI and consciousness
Message-ID: <CzsBo6.2xH@cogsci.ed.ac.uk>
Sender: usenet@cogsci.ed.ac.uk (C News Software)
Nntp-Posting-Host: bute-alter.aiai.ed.ac.uk
Organization: AIAI, University of Edinburgh, Scotland
References: <3b0n0h$ite@news1.shell> <3b11sh$hod@cantaloupe.srv.cs.cmu.edu>
Date: Thu, 24 Nov 1994 18:36:05 GMT
Lines: 67

In article <3b11sh$hod@cantaloupe.srv.cs.cmu.edu> hpm@cs.cmu.edu writes:
>
>>The problem is this:
>>
>>A) Whether a machine is running a certain program is a subjective
>>   judgement.  There is no right or wrong in the matter.  It depends
>>   on how you look at it, how you interpret what is happening.
>>
>>B) A machine running the proper program becomes conscious.  (This is
>>   the strong AI principle.)
>>
>>C) Whether something is conscious or not is not a subjective matter.
>>   We all know from personal experience that there is no room for
>>   doubt about our own consciousness.  This is a question where there
>>   is a right answer and a wrong answer.  Bill Clinton is conscious,
>>   and anyone who denies it is wrong.
>>
>>Now, I believe, to a considerable degree, all three of these statements.
>>Yet they seem to contradict each other.  This poses a dilemma for me.
>>Do other people feel this way?
>
>Well stated.
>
>I agree with A and B.
>
>Like Neil, I disagree with C.  Consciousness, like beauty, is a purely
>subjective interpretation put on a process.  A highly intelligent
>alien might well prefer to interpret you as a complicated windup toy,
>especially if it was so psychologically different from you that it had
>no referents for your normal mental experiences.

What aliens might prefer does not determine what is the case.

Then comes a longish section with which I agree ...

>What makes the issue so interesting and confusing is that, interpreted
>as a consciousness, a mechanism has the means to make interpretations,
>and in those interpretations, interprets ITSELF as a consciousness.
>[...]
>Though evolution may
>have shaped us so the narrative usually has reasonable correlation
>with external reality (allowing one to weave theories like this one
>where external reality plays the leading role), there are conditions
>(like dreams) where the correlation gets arbitrarily poor, or is
>grossly inconsistent.

And this may be so:

>Things get really strange when one realizes that, using a suitable
>mapping (for instance a big lookup table), one can interpret
>essentially anything (eg. a counter, Putnam's rock, or Searle's wall)
>as implementing this kind of consciousness.  In such an
>interpretation, the consciousness is telling itself a story that
>constitutes a reality, whether or not we (outside the story) actually
>have the interpretive means at hand to detect the story or translate
>it into our own language.

However, this does nothing to show these things are actually
conscious in the same sense that we are.  If they're conscious
in some different sense, perhaps we needn't fight over who 
(me or Moravec, say) gets to have the word "conscious" for
their sense.

In any case, does anyone really want the defense of AI to
depend on a victory for Moravec's view of this matter?

-- jeff
