From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!jupiter!morgan.ucs.mun.ca!nstn.ns.ca!bonnie.concordia.ca!uunet!mcsun!corton!ilog!davis Tue Nov 26 12:32:15 EST 1991
Article 1574 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca sci.philosophy.tech:1107 comp.ai.philosophy:1574
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!jupiter!morgan.ucs.mun.ca!nstn.ns.ca!bonnie.concordia.ca!uunet!mcsun!corton!ilog!davis
From: davis@passy.ilog.fr (Harley Davis)
Newsgroups: sci.philosophy.tech,comp.ai.philosophy
Subject: Re: Daniel Dennett (was Re: Commenting on the pos
Message-ID: <DAVIS.91Nov25065812@passy.ilog.fr>
Date: 25 Nov 91 05:58:12 GMT
References: <JMC.91Nov17135110@SAIL.Stanford.EDU> <1991Nov17.190935.5546@husc3.harvard.edu>
	<DAVIS.91Nov24033509@passy.ilog.fr>
	<1991Nov24.124945.5834@husc3.harvard.edu>
Sender: news@ilog.fr
Organization: ILOG S.A., Gentilly, France
Lines: 86
In-reply-to: zeleny@zariski.harvard.edu's message of 24 Nov 91 17:49:43 GMT


In article <1991Nov24.124945.5834@husc3.harvard.edu> zeleny@zariski.harvard.edu (Mikhail Zeleny) writes:

   HD:
   >I understand Jeff's arguments against the Turing Test.  However, a
   >computer passing a refined Total Turing Test would give greater and
   >greater corroborative evidence to the thesis that the computer is
   >conscious.  Do you disagree with this -- i.e., do you really see a
   >_fundamental_ reason why a computer could not denote?  And if so, how
   >does this compare to the putative empirical evidence you rely on to
   >judge humans?

   Earlier in this thread I gave a lengthy semantic argument to the
   effect that an FSA cannot, in principle, denote, even to the extent of
   operational success; as I expand and elaborate it, I'll continue to repost;
   however, at this time, you can refer to the earlier post, or to the summary
   in the Putnam thread.

Sorry, I posted before I had completely read the remaining messages,
and your post was among them.

Absent evidence that human beings do not themselves embody FSAs, I
don't think you have proven your point: I consider it a very real
possibility that _nothing_ denotes successfully by your criteria.
However, since you seem to maintain that at bottom you rely on
introspective intuition to justify your line of argument, which I
think is reasonable, I'm willing to "agree to disagree".
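
Just so we are arguing about the same object: by "FSA" I mean nothing
fancier than the following toy sketch in C (my own names and my own
example language, not anything taken from your post).  It accepts
exactly the bit strings containing an even number of 1's.  Notice
that every step is a bare table lookup over uninterpreted tokens:
whatever the states or symbols "denote" lives in our gloss of the
transition table, not in the machine itself.  The open question, to
my mind, is whether anything more than this, scaled up, goes on in
us.

/* A minimal finite-state automaton, for illustration only.  It
 * accepts binary strings containing an even number of '1's. */
#include <stdio.h>

#define N_STATES  2
#define N_SYMBOLS 2          /* input alphabet: '0' and '1' */

static const int delta[N_STATES][N_SYMBOLS] = {
    /* on '0'  on '1' */
    {    0,       1   },     /* state 0: even number of 1's so far */
    {    1,       0   }      /* state 1: odd number of 1's so far  */
};

static const int accepting[N_STATES] = { 1, 0 };  /* accept state 0 */

int fsa_accepts(const char *input)
{
    int state = 0;                       /* start state */
    for (; *input; ++input) {
        if (*input != '0' && *input != '1')
            return 0;                    /* reject non-alphabet tokens */
        state = delta[state][*input - '0'];
    }
    return accepting[state];
}

int main(void)
{
    printf("0110 -> %s\n", fsa_accepts("0110") ? "accept" : "reject");
    printf("0111 -> %s\n", fsa_accepts("0111") ? "accept" : "reject");
    return 0;
}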

What really matters to me is the question of when we decide to treat
artificial creatures as moral agents. Here I think that even if your
intuition fights against the conclusion that a successfully imitative
robot is conscious, it is better to err on the conservative side, and
treat the robot as an agent.  Do you agree with this thesis -- or are
you so very certain that you are right?

   HD:
   >Frankly, to me it seems that once you accept the empirical view, you
   >must commit a "millenial error", as you say, not to accept the
   >possibility of artificial consciousness.

   Allow me to offer you a speculative reply to your statement.  The problem
   of ascribing consciousness to artificial entities is similar to the
   philosophical problem of other minds, which is, in a very fundamental
   sense, unsolvable (see Benson Mates' "Skeptical Essays").  So, in order to
   make it interesting, we must relax the criteria of ascribing intelligence
   to the point that they be satisfied by intuitive plausibility, rather than
   absolute certitude.  However, it now seems that as a minimum, we must also
   require that the putatively intelligent machine not only "do the right
   thing", but that it do it for the right reason, and in the right way.

This is all good.  Of course, even in the problem of other minds, we
don't know whether the others do it the right way -- especially if it
is not, in fact, the brain which is doing the work.  Dualism is a
tough alligator to wrestle with!

   So consider a machine that so resembles us in appearance and
   behavior that it is indistinguishable from ourselves in both of
   these respects.  However, at any given time, by assessing its
   construction, we may comprehend all causal factors that influence
   its behavior (to the extent that this is a machine constructed by
   ourselves, I assume that we can do so, retracing, if necessary, the
   modifications imposed on the initial configuration by the learning
   process).  Now, David Gudeman has argued recently to the effect
   that our ability to analyze the machine's behavior in this fashion
   would constitute prima facie evidence that such a machine lacks
   consciousness.  I'd like to support this view by noting that if
   you accept Colin McGinn's argument that there exists some property
   of the brain that accounts naturalistically for consciousness, but
   we are cognitively closed with respect to that property, i.e. our
   concept-forming capabilities cannot extend to a grasp of that
   property (see "The Problem of Consciousness"), then we would be
   forced to admit that, operational success notwithstanding, the
   machine has to lack consciousness.  I only accept the consequent
   of McGinn's claim, denying the "naturalistic" part; however, the
   rest of his argument is sufficient to establish my conclusion.

The machine could develop the property on its own, after we give it
the fundamentals necessary for its growth.

-- Harley Davis
--
------------------------------------------------------------------------------
nom: Harley Davis			ILOG S.A.
net: davis@ilog.fr			2 Avenue Galliéni, BP 85
tel: (33 1) 46 63 66 66			94253 Gentilly Cedex, France
