From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!hsdndev!husc-news.harvard.edu!zariski!zeleny Mon Dec  9 10:48:12 EST 1991
Article 1880 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca sci.philosophy.tech:1305 comp.ai.philosophy:1880
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!hsdndev!husc-news.harvard.edu!zariski!zeleny
From: zeleny@zariski.harvard.edu (Mikhail Zeleny)
Newsgroups: sci.philosophy.tech,comp.ai.philosophy
Subject: Re: Daniel Dennett (was Re: Commenting on the pos
Message-ID: <1991Dec5.125437.6179@husc3.harvard.edu>
Date: 5 Dec 91 17:54:34 GMT
References: <JMC.91Nov17135110@SAIL.Stanford.EDU> <1991Nov17.190935.5546@husc3.harvard.edu> <DAVIS.91Dec2185141@passy.ilog.fr>
Organization: Dept. of Math, Harvard Univ.
Lines: 133
Nntp-Posting-Host: zariski.harvard.edu

In article <DAVIS.91Dec2185141@passy.ilog.fr> 
davis@passy.ilog.fr (Harley Davis) writes:

>In article <1991Nov25.101026.5866@husc3.harvard.edu> 
>zeleny@zariski.harvard.edu (Mikhail Zeleny) writes:


HD:
>>>What really matters to me is the question of when we decide to treat
>>>artificial creatures as moral agents. Here I think that even if your
>>>intuition fights against the conclusion that a successfully imitative
>>>robot is conscious, it is better to err on the conservative side, and
>>>treat the robot as an agent.  Do you agree with this thesis -- or are
>>>you so very certain that you are right?

MZ:
>>Allow me to offer a thought experiment.  I have before me a SIG AMT rifle
>>in 7.62 NATO caliber, equipped with a bipod and a 20-round magazine.  My
>>window affords me a good view of Massachusetts Avenue.  Suppose Jesus
>>came to me in my dream and told me to punish all those godless pinko
>>liberal Cantabrigians with my mighty weapon.  Would you blame the rifle or
>>me?

HD:
>Actually, I would probably blame Jesus - it wouldn't be the first
>trouble of this sort he's caused.

Cute, but ineffective.  Do you believe yourself capable of acting freely? 
More to the point, do you believe me capable of disobeying vox Dei?

MZ:
>>Suppose now that, on hearing the Good News, I were to call my friend Lenny
>>Rudin at Cognitech in Santa Monica, CA, and ask him for some custom image
>>recognition software, constructing an elementary infrared tracking system
>>for the rifle.  Suppose also that, just to make things more interesting, I
>>would rig up a linkage between the tracking control and a device monitoring
>>some random process, e.g. radioactive decay.  Given that I am no longer
>>pulling the trigger, would you blame the device or me?

HD:
>In long-hallowed Jewish tradition, let me answer your question with a
>question.  If you believed in God, would you blame Him for the
>Holocaust?  Less dramatically, should we blame my blunders on my
>parents?

Negative on the second question, affirmative on the first.  Your
parents may be benevolent, all right, but I doubt both their
omniscience and their omnipotence.

MZ:
>>>>So consider a machine that so resembles us in appearance and
>>>>behavior, that it is indistinguishable from ourselves in both of
>>>>these aspects.  However, at any given time, by assessing its
>>>>construction, we may comprehend all causal factors that influence
>>>>its behavior (to the extent that this is a machine constructed by
>>>>ourselves, I assume that we can do so, retracing, if necessary, the
>>>>modifications imposed on the initial configuration by the learning
>>>>process).  Now, David Gudeman has argued recently to the effect
>>>>that our ability to analyze the machine's behavior in this fashion
>>>>would constitute prima facie evidence to the effect that such
>>>>machine lacks consciousness. I'd like to support this view by
>>>>noting that if you accept Colin McGinn's arguments that there
>>>>exists some property of the brain that accounts naturalistically
>>>>for consciousness, but we are cognitively closed with respect to
>>>>that property, i.e. our concept-forming capabilities cannot extend
>>>>to a grasp of that property (see "The Problem of Consciousness"),
>>>>then we would be forced to admit that, operational success
>>>>notwithstanding, the machine has to lack consciousness.  I only
>>>>accept the consequent of McGinn's claim, denying the "naturalistic"
>>>>part; however, the rest of his argument is sufficient to establish
>>>>my conclusion.

HD:
>>>The machine could develop the property on its own, after we give it
>>>the fundamentals necessary for its growth.

MZ:
>>Fine: let us grow with it.  Once again, I assume that at any given time we
>>can comprehend all causal factors that influence the machine's behavior, by
>>assessing its construction and retracing, if necessary, the modifications
>>imposed on the initial configuration by the learning process.  Of course,
>>all that is possible only to the extent that it is a deterministic device
>>constructed by ourselves; by including a "non-deterministic" factor in its
>>construction, as indicated above, we would effectively make its behavior
>>unpredictable, if not really cognitively closed (since it is arguably true
>>that our concept-forming capabilities can extend to a grasp of the physical
>>nature of the ostensibly non-deterministic property in question).  Still,
>>to argue that such non-determinism would account for the putative property
>>of machine consciousness, would be tantamount to saying "then a miracle
>>occurs" (see Dennett's book, pp.37--8.).

HD:
>Do you really believe that there is some fundamental difference
>between understanding of all the relevant causal factors in your
>brain's development, and understanding those involved in the
>development of a computer system?  Why do we need a miracle more
>miraculous than feedback?  I just don't have time in my allotted three
>score and ten years for all this infinite recursion.

To reiterate: I don't believe myself to be capable, even in principle,
of understanding all the relevant causal factors in my mind's (never
mind the brain) development; on the other hand, I believe that
understanding the factors involved in the development of a
deterministic computer system is, in principle, always possible.  
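The distinction drawn here admits a toy sketch (mine, not from the post, and the names are illustrative): a deterministic program's behavior can be reconstructed exactly by replaying its recorded initial state, whereas a program coupled to an external random source -- the radioactive-decay monitor of the earlier thought experiment -- can only be observed, not re-derived.

```python
import random

def deterministic_run(seed, steps):
    """Every causal factor is captured by the seed; the trace is
    fully retraceable in principle, as claimed above."""
    rng = random.Random(seed)  # pseudo-random, hence deterministic
    return [rng.randint(0, 9) for _ in range(steps)]

# Retracing the "modifications imposed on the initial configuration"
# amounts simply to re-running with the same seed:
assert deterministic_run(42, 5) == deterministic_run(42, 5)

def nondeterministic_run(entropy_source, steps):
    """Behavior depends on an external physical process we can only
    sample (entropy_source is a stand-in for, e.g., a decay counter);
    two runs need not agree, and no replay is possible."""
    return [entropy_source() for _ in range(steps)]
```

The point of the sketch is only that retraceability is a property of the first function and not of the second; it says nothing, of course, about whether the difference bears on consciousness.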

Please note that if you object to the abstraction of potential
realizability implicit in my use of the term `in principle', it is
incumbent upon you to reject classical mathematics in favor of
ultra-intuitionism.

HD:
>A side point: As a former software professional, you know that even
>now, without even pretending to have achieved AI, it is in practice
>impossible to have complete knowledge of the causal factors
>influencing a computer system.  This is due to exactly the sort of
>environmental, non-deterministic factors you mention above.

Correction: I still practice my profession; however, as noted above,
this discussion has very little to do with practical limitations.

>-- Harley Davis

`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'
: Qu'est-ce qui est bien?  Qu'est-ce qui est laid?         Harvard   :
: Qu'est-ce qui est grand, fort, faible...                 doesn't   :
: Connais pas! Connais pas!                                 think    :
:                                                             so     :
: Mikhail Zeleny                                                     :
: 872 Massachusetts Ave., Apt. 707                                   :
: Cambridge, Massachusetts 02139           (617) 661-8151            :
: email zeleny@zariski.harvard.edu or zeleny@HUMA1.BITNET            :
:                                                                    :
'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`