From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!bonnie.concordia.ca!thunder.mcrcim.mcgill.edu!snorkelwacker.mit.edu!bloom-beacon!eru!hagbard!sunic!news.funet.fi!fuug!mcsun!corton!ilog!davis Mon Dec  9 10:47:29 EST 1991
Article 1805 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca sci.philosophy.tech:1259 comp.ai.philosophy:1805
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!bonnie.concordia.ca!thunder.mcrcim.mcgill.edu!snorkelwacker.mit.edu!bloom-beacon!eru!hagbard!sunic!news.funet.fi!fuug!mcsun!corton!ilog!davis
From: davis@passy.ilog.fr (Harley Davis)
Newsgroups: sci.philosophy.tech,comp.ai.philosophy
Subject: Re: Daniel Dennett (was Re: Commenting on the pos
Message-ID: <DAVIS.91Dec2185141@passy.ilog.fr>
Date: 2 Dec 91 17:51:41 GMT
References: <JMC.91Nov17135110@SAIL.Stanford.EDU> <1991Nov17.190935.5546@husc3.harvard.edu>
	<DAVIS.91Nov25065812@passy.ilog.fr>
	<1991Nov25.101026.5866@husc3.harvard.edu>
Sender: news@ilog.fr
Organization: ILOG S.A., Gentilly, France
Lines: 96
In-reply-to: zeleny@zariski.harvard.edu's message of 25 Nov 91 15:10:23 GMT


In article <1991Nov25.101026.5866@husc3.harvard.edu> zeleny@zariski.harvard.edu (Mikhail Zeleny) writes:

   (Harley Davis) writes:

   HD:
   >What really matters to me is the question of when we decide to treat
   >artificial creatures as moral agents. Here I think that even if your
   >intuition fights against the conclusion that a successfully imitative
   >robot is conscious, it is better to err on the conservative side, and
   >treat the robot as an agent.  Do you agree with this thesis -- or are
   >you so very certain that you are right?

   Allow me to offer a thought experiment.  I have before me a SIG AMT rifle
   in 7.62 NATO caliber, equipped with a bipod and a 20-round magazine.  My
   window affords me a good view of Massachusetts Avenue.  Suppose Jesus
   came to me in my dream and told me to punish all those godless pinko
   liberal Cantabrigians with my mighty weapon.  Would you blame the rifle or
   me?

Actually, I would probably blame Jesus - it wouldn't be the first time
he's caused trouble of this sort.

   Suppose now that, on hearing the Good News, I were to call my friend Lenny
   Rudin at Cognitech in Santa Monica, CA, and ask him for some custom image
   recognition software, constructing an elementary infrared tracking system
   for the rifle.  Suppose also that, just to make things more interesting, I
   would rig up a linkage between the tracking control and a device monitoring
   some random process, e.g. radioactive decay.  Given that I am no longer
   pulling the trigger, would you blame the device or me?

In long-hallowed Jewish tradition, let me answer your question with a
question.  If you believed in God, would you blame Him for the
Holocaust?  Less dramatically, should we blame my blunders on my
parents?

   MZ:
   >   So consider a machine that so resembles us in appearance and
   >   behavior, that it is indistinguishable from ourselves in both of
   >   these aspects.  However, at any given time, by assessing its
   >   construction, we may comprehend all causal factors that influence
   >   its behavior (to the extent that this is a machine constructed by
   >   ourselves, I assume that we can do so, retracing, if necessary, the
   >   modifications imposed on the initial configuration by the learning
   >   process).  Now, David Gudeman has argued recently to the effect
   >   that our ability to analyze the machine's behavior in this fashion
   >   would constitute prima facie evidence to the effect that such a
   >   machine lacks consciousness.  I'd like to support this view by
   >   noting that if you accept Colin McGinn's arguments that there
   >   exists some property of the brain that accounts naturalistically
   >   for consciousness, but we are cognitively closed with respect to
   >   that property, i.e. our concept-forming capabilities cannot extend
   >   to a grasp of that property (see "The Problem of Consciousness"),
   >   then we would be forced to admit that, operational success
   >   notwithstanding, the machine has to lack consciousness.  I only
   >   accept the consequent of McGinn's claim, denying the "naturalistic"
   >   part; however, the rest of his argument is sufficient to establish
   >   my conclusion.

   HD:
   >The machine could develop the property on its own, after we give it
   >the fundamentals necessary for its growth.

   Fine: let us grow with it.  Once again, I assume that at any given time we
   can comprehend all causal factors that influence the machine's behavior, by
   assessing its construction and retracing, if necessary, the modifications
   imposed on the initial configuration by the learning process.  Of course,
   all that is possible only to the extent that it is a deterministic device
   constructed by ourselves; by including a "non-deterministic" factor in its
   construction, as indicated above, we would effectively make its behavior
   unpredictable, if not really cognitively closed (since it is arguably true
   that our concept-forming capabilities can extend to a grasp of the physical
   nature of the ostensibly non-deterministic property in question).  Still,
   to argue that such non-determinism would account for the putative property
   of machine consciousness, would be tantamount to saying "then a miracle
   occurs" (see Dennett's book, pp.37--8.).

Do you really believe that there is some fundamental difference
between understanding all the relevant causal factors in your
brain's development and understanding those involved in the
development of a computer system?  Why do we need a miracle more
miraculous than feedback?  I just don't have time in my allotted three
score and ten years for all this infinite recursion.

A side point: As a former software professional, you know that even
now, without even pretending to have achieved AI, it is in practice
impossible to have complete knowledge of the causal factors
influencing a computer system.  This is due to exactly the sort of
environmental, non-deterministic factors you mention above.
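The contrast can be made concrete with a small sketch (in modern Python,
purely for illustration - nothing here is from the original discussion).
A seeded pseudo-random generator is the sort of "deterministic device
constructed by ourselves" whose every output we can retrace from its
construction; a generator coupled to environmental entropy is not
predictable from inspection of the program alone:

```python
import os
import random

def seeded_draws(seed, n=5):
    """A deterministic device we constructed: given the seed, every
    output can be retraced by re-running the same construction."""
    rng = random.Random(seed)
    return [rng.randrange(256) for _ in range(n)]

def environmental_draws(n=5):
    """A device coupled to its environment: os.urandom pulls entropy
    from outside the program, so no amount of source-code inspection
    predicts what it will return."""
    return list(os.urandom(n))

# The constructed device is predictable by re-construction...
assert seeded_draws(42) == seeded_draws(42)
# ...while the environmentally coupled one is not reproducible in general.
print(environmental_draws())
```

The point of the sketch is only that the second kind of coupling is
already routine in ordinary software - scheduling, timing, and I/O all
play the role `os.urandom` plays here - without anyone pretending this
amounts to consciousness.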

-- Harley Davis
--
------------------------------------------------------------------------------
nom: Harley Davis			ILOG S.A.
net: davis@ilog.fr			2 Avenue Galliéni, BP 85
tel: (33 1) 46 63 66 66			94253 Gentilly Cedex, France


