From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!uwm.edu!cs.utexas.edu!swrinde!gatech!psuvax1!hsdndev!husc-news.harvard.edu!zariski!zeleny Tue Nov 26 12:32:01 EST 1991
Article 1551 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca sci.philosophy.tech:1087 comp.ai.philosophy:1551
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!uwm.edu!cs.utexas.edu!swrinde!gatech!psuvax1!hsdndev!husc-news.harvard.edu!zariski!zeleny
From: zeleny@zariski.harvard.edu (Mikhail Zeleny)
Newsgroups: sci.philosophy.tech,comp.ai.philosophy
Subject: Re: Daniel Dennett (was Re: Commenting on the pos
Message-ID: <1991Nov25.101026.5866@husc3.harvard.edu>
Date: 25 Nov 91 15:10:23 GMT
References: <JMC.91Nov17135110@SAIL.Stanford.EDU> <1991Nov17.190935.5546@husc3.harvard.edu> <DAVIS.91Nov25065812@passy.ilog.fr>
Organization: Dept. of Math, Harvard Univ.
Lines: 128
Nntp-Posting-Host: zariski.harvard.edu

In article <DAVIS.91Nov25065812@passy.ilog.fr> 
davis@passy.ilog.fr (Harley Davis) writes:

>In article <1991Nov24.124945.5834@husc3.harvard.edu> 
zeleny@zariski.harvard.edu (Mikhail Zeleny) writes:

HD:
>In lieu of evidence that human beings do not embody FSA's, I don't
>think you have proven your point: I consider it a very real
>possibility that _nothing_ denotes successfully by your criteria.
>However, since you seem to maintain that at the bottom you rely on
>introspective intuition to justify your line of argument, which I
>think is reasonable, I'm willing to "agree to disagree".

I don't understand what it would mean for human beings to "embody"
FSA's; my argument was made to defend the view that FSA's cannot, in
principle, model human linguistic performance.  As for the evidence of
successful human denoting, I would direct your attention to the
enormous body of empirical data collected by descriptive linguists.  On the
other hand, since the first-person privilege is expressly denied by AI
theorists, my claims of introspective evidence may not carry much weight in
their circles.
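For readers unfamiliar with the formalism under dispute, here is a toy
sketch of a finite state automaton (entirely hypothetical, not anything from
the thread): a finite transition table driven over an input string.  The
classical point behind the claim above is that no such finite table can
recognize unboundedly nested structure, e.g. the language {a^n b^n}, since
counting the nesting requires unbounded memory.

```python
# A minimal finite state automaton (FSA): a hypothetical toy example.
# This one accepts exactly the strings over {'a', 'b'} that end in 'b'.

def make_fsa(transitions, start, accepting):
    """Build a recognizer from a transition table {(state, symbol): state}."""
    def accepts(string):
        state = start
        for symbol in string:
            if (state, symbol) not in transitions:
                return False        # no transition defined: reject
            state = transitions[(state, symbol)]
        return state in accepting
    return accepts

ends_in_b = make_fsa(
    transitions={('q0', 'a'): 'q0', ('q0', 'b'): 'q1',
                 ('q1', 'a'): 'q0', ('q1', 'b'): 'q1'},
    start='q0',
    accepting={'q1'},
)

print(ends_in_b("aab"))   # True
print(ends_in_b("aba"))   # False
```

Whatever the transition table, the machine's whole memory is its current
state drawn from a fixed finite set; that finiteness is what the argument
against finite-state models of linguistic performance turns on.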

HD:
>What really matters to me is the question of when we decide to treat
>artificial creatures as moral agents. Here I think that even if your
>intuition fights against the conclusion that a successfully imitative
>robot is conscious, it is better to err on the conservative side, and
>treat the robot as an agent.  Do you agree with this thesis -- or are
>you so very certain that you are right?

Allow me to offer a thought experiment.  I have before me a SIG AMT rifle
in 7.62 NATO caliber, equipped with a bipod and a 20-round magazine.  My
window affords me a good view of Massachusetts Avenue.  Suppose Jesus
came to me in my dream and told me to punish all those godless pinko
liberal Cantabrigians with my mighty weapon.  Would you blame the rifle or
me? 

Suppose now that, on hearing the Good News, I were to call my friend Lenny
Rudin at Cognitech in Santa Monica, CA, and ask him for some custom image
recognition software with which to construct an elementary infrared
tracking system for the rifle.  Suppose also that, just to make things more
interesting, I were to rig up a linkage between the tracking control and a
device monitoring some random process, e.g. radioactive decay.  Given that
I am no longer pulling the trigger, would you blame the device or me?

HD:
>   >Frankly, to me it seems that once you accept the empirical view, you
>   >must commit a "millennial error", as you say, not to accept the
>   >possibility of artificial consciousness.

MZ:
>   Allow me to offer you a speculative reply to your statement.  The problem
>   of ascribing consciousness to artificial entities is similar to the
>   philosophical problem of other minds, which is, in a very fundamental
>   sense, unsolvable (see Benson Mates' "Skeptical Essays").  So, in order to
>   make it interesting, we must relax the criteria of ascribing intelligence
>   to the point that they be satisfied by intuitive plausibility, rather than
>   absolute certitude.  However, it now seems that as a minimum, we must also
>   require that the putatively intelligent machine not only "do the right
>   thing", but that it do it for the right reason, and in the right way.

HD:
>This is all good.  Of course, even in the problem of other minds, we
>don't know if the others do it the right way --- especially if it is
>not, in fact, the brain which is doing the work.  Dualism is a tough
>alligator to wrestle with!

Let's just agree for now to call the way we imagine the others to do it,
the right way.

MZ:
>   So consider a machine that so resembles us in appearance and
>   behavior, that it is indistinguishable from ourselves in both of
>   these aspects.  However, at any given time, by assessing its
>   construction, we may comprehend all causal factors that influence
>   its behavior (to the extent that this is a machine constructed by
>   ourselves, I assume that we can do so, retracing, if necessary, the
>   modifications imposed on the initial configuration by the learning
>   process).  Now, David Gudeman has argued recently to the effect
>   that our ability to analyze the machine's behavior in this fashion
>   would constitute prima facie evidence to the effect that such a
>   machine lacks consciousness. I'd like to support this view by
>   noting that if you accept Colin McGinn's arguments that there
>   exists some property of the brain that accounts naturalistically
>   for consciousness, but we are cognitively closed with respect to
>   that property, i.e. our concept-forming capabilities cannot extend
>   to a grasp of that property (see "The Problem of Consciousness"),
>   then we would be forced to admit that, operational success
>   notwithstanding, the machine has to lack consciousness.  I only
>   accept the consequent of McGinn's claim, denying the "naturalistic"
>   part; however, the rest of his argument is sufficient to establish
>   my conclusion.

HD:
>The machine could develop the property on its own, after we give it
>the fundamentals necessary for its growth.

Fine: let us grow with it.  Once again, I assume that at any given time we
can comprehend all causal factors that influence the machine's behavior, by
assessing its construction and retracing, if necessary, the modifications
imposed on the initial configuration by the learning process.  Of course,
all that is possible only to the extent that it is a deterministic device
constructed by ourselves; by including a "non-deterministic" factor in its
construction, as indicated above, we would effectively make its behavior
unpredictable, if not really cognitively closed (since it is arguably true
that our concept-forming capabilities can extend to a grasp of the physical
nature of the ostensibly non-deterministic property in question).  Still,
to argue that such non-determinism would account for the putative property
of machine consciousness, would be tantamount to saying "then a miracle
occurs" (see Dennett's book, pp.37--8.).

>-- Harley Davis

'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`
`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'
: Qu'est-ce qui est bien?  Qu'est-ce qui est laid?         Harvard   :
: Qu'est-ce qui est grand, fort, faible...                 doesn't   :
: Connais pas! Connais pas!                                 think    :
:                                                             so     :
: Mikhail Zeleny                                                     :
: 872 Massachusetts Ave., Apt. 707                                   :
: Cambridge, Massachusetts 02139                                     :
: (617) 661-8151                                                     :
: email zeleny@zariski.harvard.edu or zeleny@HUMA1.BITNET            :
:                                                                    :
'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`
`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'


