From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!snorkelwacker.mit.edu!hsdndev!husc-news.harvard.edu!zariski!zeleny Tue Nov 26 12:31:58 EST 1991
Article 1546 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca sci.philosophy.tech:1085 comp.ai.philosophy:1546
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!snorkelwacker.mit.edu!hsdndev!husc-news.harvard.edu!zariski!zeleny
From: zeleny@zariski.harvard.edu (Mikhail Zeleny)
Newsgroups: sci.philosophy.tech,comp.ai.philosophy
Subject: Re: Daniel Dennett (was Re: Commenting on the pos
Message-ID: <1991Nov24.124945.5834@husc3.harvard.edu>
Date: 24 Nov 91 17:49:43 GMT
References: <JMC.91Nov17135110@SAIL.Stanford.EDU> <1991Nov17.190935.5546@husc3.harvard.edu> <DAVIS.91Nov24033509@passy.ilog.fr>
Organization: Dept. of Math, Harvard Univ.
Lines: 99
Nntp-Posting-Host: zariski.harvard.edu

In article <DAVIS.91Nov24033509@passy.ilog.fr> 
davis@passy.ilog.fr (Harley Davis) writes:

>In article <1991Nov23.024440.5800@husc3.harvard.edu> 
>zeleny@zariski.harvard.edu (Mikhail Zeleny) writes:

HD:
>   >What makes you think that real human beings succeed in denoting,
>   >according to your high standards for this esteemed relation?  Surely
>   >you can't just say that *by definition* humans denote, thus
>   >automatically excluding any non-humans from the privileged caste of
>   >denoters a priori?  On the other hand, you don't want to say that we
>   >empirically determine that we denote, because then you must admit that
>   >a computer which passes the Turing Test also denotes.  So what makes
>   >you so sure that denoting is crucial for intelligence?

MZ:
>   The inadequacy of the Turing test has been covered elsewhere by Jeff
>   Dalton; accordingly, I'll limit myself to a straightforward answer.  The
>   evidence for my claim that we succeed in denoting sensible objects must
>   necessarily be empirical, as the relation of denoting is itself contingent
>   and a posteriori.  On the other hand, I believe that our capability to
>   reliably denote mathematical objects (substitute `forms' or `structures' if
>   you are a follower of Bourbaki or Saunders MacLane, respectively; if you
>   are a formalist, this discussion is quite pointless), or to reliably
>   express meanings, is a priori, directly verifiable by introspection, and
>   so, at least to me, much less open to doubt.

HD:
>OK, let's accept the empirical view.  So if a computer shows empirical
>evidence of denoting -- I suppose by consistently and correctly
>referring to things by their names -- then we must also grant it this
>ability.  Additionally, if the computer gives reasonable evidence of
>introspection, we should also take it at its word.  Why shouldn't we?
>(Even if we don't, the computer itself could, according to your view.)

Assuming that it had a subjective viewpoint, it could do so; alas, such an
assumption would beg the question of consciousness pace Nagel ("What is it
like to be a bat?"); note that Dennett has specifically railed against the
first-person perspective in his latest book, pp. 71ff, 441ff, so this move is
not available to him.  However, for the sake of argument, I am prepared to
grant you the operational success criterion of reference, which I generally
consider to be insufficient.

HD:
>I understand Jeff's arguments against the Turing Test.  However, a
>computer passing a refined Total Turing Test would give greater and
>greater corroborative evidence to the thesis that the computer is
>conscious.  Do you disagree with this -- ie, do you really see a
>_fundamental_ reason why a computer could not denote?  And if so, how
>does this compare to the putative empirical evidence you rely on to
>judge humans?

Earlier in this thread I gave a lengthy semantical argument to the effect
that an FSA cannot, in principle, denote, even to the extent of operational
success; as I expand and elaborate it, I shall continue to repost; in the
meantime, you can refer to the earlier post, or to the summary in the Putnam
thread.

HD:
>Frankly, to me it seems that once you accept the empirical view, you
>must commit a "millenial error", as you say, not to accept the
>possibility of artificial consciousness.

Allow me to offer a speculative reply to your statement.  The problem of
ascribing consciousness to artificial entities is similar to the
philosophical problem of other minds, which is, in a very fundamental sense,
unsolvable (see Benson Mates' "Skeptical Essays").  So, in order to make the
question interesting, we must relax the criteria for ascribing intelligence
to the point that they can be satisfied by intuitive plausibility, rather
than absolute certitude.  However, it now seems that, as a minimum, we must
also require that the putatively intelligent machine not only "do the right
thing", but that it do it for the right reason, and in the right way.

So consider a machine that so resembles us in appearance and behavior that
it is indistinguishable from ourselves in both of these respects.  However,
at any given time, by examining its construction, we may comprehend all
causal factors that influence its behavior (to the extent that this is a
machine constructed by ourselves, I assume that we can do so, retracing, if
necessary, the modifications imposed on the initial configuration by the
learning process).  Now, David Gudeman has argued recently to the effect
that our ability to analyze the machine's behavior in this fashion would
constitute prima facie evidence that such a machine lacks consciousness.
I'd like to support this view by noting that if you accept Colin McGinn's
arguments that there exists some property of the brain that accounts
naturalistically for consciousness, but we are cognitively closed with
respect to that property, i.e. our concept-forming capabilities cannot
extend to a grasp of that property (see "The Problem of Consciousness"),
then we would be forced to admit that, operational success notwithstanding,
the machine has to lack consciousness.  I accept only the consequent of
McGinn's claim, denying the "naturalistic" part; however, the rest of his
argument is sufficient to establish my conclusion.

>-- Harley Davis
>--
>------------------------------------------------------------------------------
>nom: Harley Davis			ILOG S.A.
>net: davis@ilog.fr			2 Avenue Galliéni, BP 85
>tel: (33 1) 46 63 66 66			94253 Gentilly Cedex, France
