Newsgroups: comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!ix.netcom.com!netcom.com!jqb
From: jqb@netcom.com (Jim Balter)
Subject: Re: Strong AI and consciousness
Message-ID: <jqbD0D5F0.6r@netcom.com>
Organization: NETCOM On-line Communication Services (408 261-4700 guest)
References: <3b0n0h$ite@news1.shell> <3b11sh$hod@cantaloupe.srv.cs.cmu.edu> <CzsBo6.2xH@cogsci.ed.ac.uk>
Date: Tue, 6 Dec 1994 00:30:35 GMT
Lines: 82

In article <CzsBo6.2xH@cogsci.ed.ac.uk>,
Jeff Dalton <jeff@aiai.ed.ac.uk> wrote:
>In article <3b11sh$hod@cantaloupe.srv.cs.cmu.edu> hpm@cs.cmu.edu writes:
>>
>>>The problem is this:
>>>
>>>A) Whether a machine is running a certain program is a subjective
>>>   judgement.  There is no right or wrong in the matter.  It depends
>>>   on how you look at it, how you interpret what is happening.
>>>
>>>B) A machine running the proper program becomes conscious.  (This is
>>>   the strong AI principle.)
>>>
>>>C) Whether something is conscious or not is not a subjective matter.
>>>   We all know from personal experience that there is no room for
>>>   doubt about our own consciousness.  This is a question where there
>>>   is a right answer and a wrong answer.  Bill Clinton is conscious,
>>>   and anyone who denies it is wrong.
>>>
>>>Now, I believe, to a considerable degree, all three of these statements.
>>>Yet they seem to contradict each other.  This poses a dilemma for me.
>>>Do other people feel this way?
>>
>>Well stated.
>>
>>I agree with A and B.
>>
>>Like Neil, I disagree with C.  Consciousness, like beauty, is a purely
>>subjective interpretation put on a process.  A highly intelligent
>>alien might well prefer to interpret you as a complicated windup toy,
>>especially if it was so psychologically different from you that it had
>>no referents for your normal mental experiences.
>
>What aliens might prefer does not determine what is the case.

Are you saying that it is a matter of fact as to whether you are a complicated
windup toy, and so the aliens could be wrong?  I suppose so, given a literal
interpretation of "windup toy".  But if aliens interpreted you as a complicated
machine, how could you dispute it?  I suppose you could if you were careful
never to define "machine" and carried on the debate for years, always with the
implicit assumption that there is a matter of fact as to what a machine is
and that anyone who uses the word must be referring to that matter of fact.

We can clarify the problem with "machine" by objectively defining machines in
terms of algorithms or Turing Machines.  OK, perhaps we can do something similar for
consciousness.  Let's see, I have a hazy notion about it that involves
complicated self-interpreting analytical processes that produce what I might
hazily call a "conceptual field", with tokens that fit what we call "qualia".
Since the self-referential interpretation is in terms of the tokens rather than
the mechanisms that produce the tokens, the tokens are interpreted as being
real in and of themselves.  It may be possible to understand the relationship
between the qualia tokens and the underlying mechanisms by studying the
mechanisms themselves, but that is a different level of description from that of
the tokens, the qualia, and thus the qualia can never have
a mechanistic character.  They are interpreted as ("seem")
"direct" or "perceived", and don't "fit" into the interpretation of mechanisms.

Now, I think my hazy notion provides a framework in which to see human
mechanisms as being conscious, to see that consciousness is not just an
"unnecessary artifact", to see that something that can pass a properly probing
Turing Test might reasonably be assumed to have the proper sort of
self-interpretation to qualify as being conscious by this hazy "definition".
But I suspect that my definition is not universally accepted.  In fact some
will argue that it is too vague and hazy to be a definition at all.  I include
myself among them.  But it's the best I can do at the moment.  Perhaps, Jeff,
you can do better?  Perhaps you can provide a sufficiently detailed definition
of consciousness such that it can *mean* anything for it to be a matter of
fact whether something is conscious.  And if you say "Like I am", I will
complain that I don't understand precisely *what* you mean by that.

>However, this does nothing to show these things are actually
>conscious in the same sense that we are.

What *is* that sense, Jeff?  Describe it so that we know what you mean.

>In any case, does anyone really want the defense of AI to
>depend on a victory for Moravec's view of this matter?

What is "the defense of AI"?

-- 
<J Q B>
