From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!yale.edu!qt.cs.utexas.edu!cs.utexas.edu!uunet!mcsun!corton!ilog!davis Tue Nov 26 12:31:53 EST 1991
Article 1537 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca sci.philosophy.tech:1081 comp.ai.philosophy:1537
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!yale.edu!qt.cs.utexas.edu!cs.utexas.edu!uunet!mcsun!corton!ilog!davis
From: davis@passy.ilog.fr (Harley Davis)
Newsgroups: sci.philosophy.tech,comp.ai.philosophy
Subject: Re: Daniel Dennett (was Re: Commenting on the pos
Message-ID: <DAVIS.91Nov24033509@passy.ilog.fr>
Date: 24 Nov 91 02:35:09 GMT
References: <JMC.91Nov17135110@SAIL.Stanford.EDU> <1991Nov17.190935.5546@husc3.harvard.edu>
	<DAVIS.91Nov19224009@passy.ilog.fr>
	<1991Nov23.024440.5800@husc3.harvard.edu>
Sender: news@ilog.fr
Organization: ILOG S.A., Gentilly, France
Lines: 49
In-reply-to: zeleny@zariski.harvard.edu's message of 23 Nov 91 07:44:38 GMT


In article <1991Nov23.024440.5800@husc3.harvard.edu> zeleny@zariski.harvard.edu (Mikhail Zeleny) writes:

   HD:
   >What makes you think that real human beings succeed in denoting,
   >according to your high standards for this esteemed relation?  Surely
   >you can't just say that *by definition* humans denote, thus
   >automatically excluding any non-humans from the privileged caste of
   >denoters a priori?  On the other hand, you don't want to say that we
   >empirically determine that we denote, because then you must admit that
   >a computer which passes the Turing Test also denotes.  So what makes
   >you so sure that denoting is crucial for intelligence?

   The inadequacy of the Turing test has been covered elsewhere by Jeff
   Dalton; accordingly, I'll limit myself to a straightforward answer.  The
   evidence for my claim that we succeed in denoting sensible objects must
   necessarily be empirical, as the relation of denoting is itself contingent
   and a posteriori.  On the other hand, I believe that our capability to
   reliably denote mathematical objects (substitute `forms' or `structures' if
   you are a follower of Bourbaki or Saunders MacLane, respectively; if you
   are a formalist, this discussion is quite pointless), or to reliably
   express meanings, is a priori, directly verifiable by introspection, and
   so, at least to me, much less open to doubt.

OK, let's accept the empirical view.  So if a computer shows empirical
evidence of denoting -- I suppose by consistently and correctly
referring to things by their names -- then we must also grant it this
ability.  Additionally, if the computer gives reasonable evidence of
introspection, we should also take it at its word.  Why shouldn't we?
(Even if we don't, the computer itself could, according to your view.)

I understand Jeff's arguments against the Turing Test.  However, a
computer passing a refined Total Turing Test would provide ever
stronger corroborative evidence for the thesis that the computer is
conscious.  Do you disagree with this -- i.e., do you really see a
_fundamental_ reason why a computer could not denote?  And if so, how
does that reason compare to the putative empirical evidence you rely
on to judge humans?

Frankly, it seems to me that once you accept the empirical view, you
must commit a "millennial error", as you say, in order not to accept
the possibility of artificial consciousness.

-- Harley Davis
--
------------------------------------------------------------------------------
nom: Harley Davis			ILOG S.A.
net: davis@ilog.fr			2 Avenue Galliéni, BP 85
tel: (33 1) 46 63 66 66			94253 Gentilly Cedex, France
