Newsgroups: comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!fs7.ece.cmu.edu!hudson.lm.com!godot.cc.duq.edu!newsfeed.pitt.edu!gatech!howland.reston.ans.net!pipex!uknet!festival!edcogsci!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Subject: Re: Strong AI and consciousness
Message-ID: <D0EnCG.449@cogsci.ed.ac.uk>
Sender: usenet@cogsci.ed.ac.uk (C News Software)
Nntp-Posting-Host: bute.aiai.ed.ac.uk
Organization: AIAI, University of Edinburgh, Scotland
References: <3b11sh$hod@cantaloupe.srv.cs.cmu.edu> <CzsBo6.2xH@cogsci.ed.ac.uk> <jqbD0D5F0.6r@netcom.com>
Date: Tue, 6 Dec 1994 19:55:27 GMT
Lines: 149

In article <jqbD0D5F0.6r@netcom.com> jqb@netcom.com (Jim Balter) writes:
>In article <CzsBo6.2xH@cogsci.ed.ac.uk>,
>Jeff Dalton <jeff@aiai.ed.ac.uk> wrote:
>>In article <3b11sh$hod@cantaloupe.srv.cs.cmu.edu> hpm@cs.cmu.edu writes:
>>>
>>>>The problem is this:
>>>>
>>>>A) Whether a machine is running a certain program is a subjective
>>>>   judgement.  There is no right or wrong in the matter.  It depends
>>>>   on how you look at it, how you interpret what is happening.
>>>>
>>>>B) A machine running the proper program becomes conscious.  (This is
>>>>   the strong AI principle.)
>>>>
>>>>C) Whether something is conscious or not is not a subjective matter.
>>>>   We all know from personal experience that there is no room for
>>>>   doubt about our own consciousness.  This is a question where there
>>>>   is a right answer and a wrong answer.  Bill Clinton is conscious,
>>>>   and anyone who denies it is wrong.
>>>>
>>>>Now, I believe, to a considerable degree, all three of these statements.
>>>>Yet they seem to contradict each other.  This poses a dilemma for me.
>>>>Do other people feel this way?
>>>
>>>Well stated.
>>>
>>>I agree with A and B.
>>>
>>>Like Neil, I disagree with C.  Consciousness, like beauty, is a purely
>>>subjective interpretation put on a process.  A highly intelligent
>>>alien might well prefer to interpret you as a complicated windup toy,
>>>especially if it was so psychologically different from you that it had
>>>no referents for your normal mental experiences.
>>
>>What aliens might prefer does not determine what is the case.
>
>Are you saying that it is a matter of fact as to whether you are a complicated
>windup toy, and so the aliens could be wrong? 

I'm saying what aliens prefer does not determine what is the case.

Whether there's a fact as to whether I'm a windup toy depends on
the meaning of "windup toy", but not on what interpretations aliens
prefer.

>I suppose so, given a literal
>interpretation of "windup toy".  But if aliens interpreted you as a 
>complicated machine, how could you dispute it?

Perhaps I wouldn't.  It depends on what they mean by "machine".

>  I suppose you could if you were careful
>never to define "machine", and carry on a debate for years, always with the
>implicit assumption that there is a matter of fact as to what a machine is
>and that anyone who uses the word must be referring to that matter of fact.

I haven't said there's a matter of fact as to what a machine is,
but, ok, I'll say it now: there are facts about English as to what
the word "machine" means.

>We can clarify the problem with "machine" by objectively defining
>them in terms of algorithms or Turing Machines. 

A rather odd definition, since "machine" includes circular saws
and automobiles, which are not -- I would have thought -- naturally
characterized in terms of algorithms or TMs.

> Ok, perhaps we can do something similar for
>consciousness.  Let's see, I have a hazy notion about it that involves
>complicated self-interpreting analytical processes that produce what I might
>hazily call a "conceptual field", with tokens that fit what we call "qualia".
>Since the self-referential interpretation is in terms of the tokens rather than
>the mechanisms that produce the tokens, the tokens are interpreted as being
>real in and of themselves.  It may be possible to understand the relationship
>between the qualia tokens and the underlying mechanisms by studying the
>mechanisms themselves, but this is at a different level of description
>than the tokens, the qualia, themselves, and thus the qualia can never have
>a mechanistic character themselves.  They are interpreted as ("seem")
>"direct" or "perceived", and don't "fit" into the interpretation of
>mechanisms.

That sounds like a reasonable beginning to me.

>Now, I think my hazy notion provides a framework in which to see human
>mechanisms as being conscious, to see that consciousness is not just an
>"unnecessary artifact", to see that something that can pass a properly probing
>Turing Test might reasonably be assumed to have the proper sort of
>self-interpretation to qualify as being conscious by this hazy "definition".

Ok.

>But I suspect that my definition is not universally accepted.  In fact some
>will argue that it is too vague and hazy to be a definition at all.  I include
>myself among them.  But it's the best I can do at the moment.  Perhaps, Jeff,
>you can do better? 

Not at the moment.

> Perhaps you can provide a sufficiently detailed definition
>of consciousness such that it can *mean* anything for it to be a matter of
>fact whether something is conscious.  And if you say "Like I am", I will
>complain that I don't understand precisely *what* you mean by that.

One test of a definition of consciousness is that it has humans as
conscious when they're up and about and behaving in the usual ways.

You can, of course, say "by `conscious' I mean X" for whatever X
you like, but if it has it that humans aren't conscious then we're
talking about different things.

>>However, this does nothing to show these things are actually
>>conscious in the same sense that we are.
>
>What *is* that sense, Jeff?  Describe it so that we know what you mean.

I do not believe that you do not already know roughly what
consciousness amounts to.  Indeed, your definition above shows
that much.  What I have in mind is subjective experience.
You can determine much of its character yourself, by introspection.
I'm not sure "consciousness" can be given a definition of the 
sort that would satisfy a determined critic.  For instance, you
might demand an operational definition that I couldn't supply
without doing the next 100 years of research in AI, neurophysiology,
etc (if then).

Indeed, you are so hostile to everything I say that I cannot see
the point in trying to get you to say you know what I mean.  You're
going to attack and ridicule what I say in any case, so I may as
well have you attacking me for "refusing to define consciousness"
as anything else.  It will also save time.

I note BTW, that many people, such as Dennett, manage to write
papers and longer works in which they do not define consciousness
without much by way of complaint that it's not clear what they
mean.

(I agree w/ Aaron Sloman that "consciousness" has and has had
a number of different meanings, BTW.)

>>In any case, does anyone really want the defense of AI to
>>depend on a victory for Moravec's view of this matter?
>
>What is "the defense of AI"?

Is that supposed to be a mystery?  It refers to arguments in support
of the possibility of AI and against critics of AI such as Searle,
Penrose, and Dreyfus.

-- jeff
