From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!jupiter!morgan.ucs.mun.ca!nstn.ns.ca!bonnie.concordia.ca!uunet!spool.mu.edu!uwm.edu!linac!att!rutgers!hsdndev!husc-news.harvard.edu!brauer!zeleny Tue Nov 19 11:10:33 EST 1991
Xref: newshub.ccs.yorku.ca rec.arts.books:10012 sci.philosophy.tech:972 comp.ai.philosophy:1362
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!jupiter!morgan.ucs.mun.ca!nstn.ns.ca!bonnie.concordia.ca!uunet!spool.mu.edu!uwm.edu!linac!att!rutgers!hsdndev!husc-news.harvard.edu!brauer!zeleny
From: zeleny@brauer.harvard.edu (Mikhail Zeleny)
Newsgroups: rec.arts.books,sci.philosophy.tech,comp.ai.philosophy
Subject: Re: Daniel Dennett (was Re: Commenting on the pos
Message-ID: <1991Nov17.190935.5546@husc3.harvard.edu>
Date: 18 Nov 91 00:09:33 GMT
References: <1991Nov15.003438.11323@grebyn.com> <1991Nov15.160741.5495@husc3.harvard.edu> <JMC.91Nov17135110@SAIL.Stanford.EDU>
Organization: Dept. of Math, Harvard Univ.
Lines: 83
Nntp-Posting-Host: brauer.harvard.edu

In article <JMC.91Nov17135110@SAIL.Stanford.EDU> jmc@cs.Stanford.EDU writes:

>Zeleny says that I have failed to come up with an adequate account of the
>distinction between expressing and denoting.  I haven't tried yet, mainly
>because I don't know what distinction Zeleny is referring to.  Maybe it
>is well known, and I don't know of it because of limited acquaintance with
>philosophy.  

Your understanding of my words might be improved if you cited them instead
of attempting paraphrase.  I said that any AI researcher stands in need of
an adequate semantical theory that would characterize the relevant
relations of expressing and denoting, and could be implemented by a finite
state automaton, and that, so far, you have failed to come up with an
answer to this challenge.  My question has its origin in your claim of
possessing what you called "an AI notion of success of reference", to which
I answered as follows.

I hope that you would agree with me that the operational success of any
implementation of your favorite theory of reference will depend on its
theoretical adequacy.  It's well known that classical model-theoretic
semantics is incapable of fully characterizing reference; hence it is
incapable of sufficiently constraining any derived operational criteria
that purport to implement what you call the ``AI notion of success of
reference''.  (See e.g. an overview in Lakoff's ``Women, Fire, and
Dangerous Things'', chapter 15.)  Now, the alternative to model-theoretic
semantics that I am advocating above (the Frege-Church semantics) doesn't
seem to lend itself to an implementation, or even a representation, in
finite-state automata.  Please note that the burden of providing a finitely
representable semantical theory capable of fixing the operational criteria
of reference lies on AI researchers like you.
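[To make the "finitely representable" constraint concrete, here is a minimal sketch of my own (not anything McCarthy has proposed): a deterministic finite automaton whose entire semantics is exhausted by a finite transition table. Whatever theory of reference is offered would have to be compressible into something of this kind.]

```python
# A toy DFA: the whole "theory" is the finite dict of transitions below.
# This is an illustration of finite representability, nothing more.

class DFA:
    def __init__(self, transitions, start, accepting):
        self.transitions = transitions  # dict: (state, symbol) -> state
        self.start = start
        self.accepting = accepting

    def accepts(self, string):
        state = self.start
        for symbol in string:
            state = self.transitions.get((state, symbol))
            if state is None:       # undefined transition: reject
                return False
        return state in self.accepting

# Example language: strings over {'a','b'} with an even number of 'a's.
even_a = DFA(
    transitions={
        ("even", "a"): "odd",  ("even", "b"): "even",
        ("odd",  "a"): "even", ("odd",  "b"): "odd",
    },
    start="even",
    accepting={"even"},
)
```

The point of the sketch is the contrast: the automaton's behavior on every input is fixed by finitely many entries, whereas the Frege-Church apparatus of senses does not obviously admit any such finite tabulation.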

Now that I've reiterated my statements for your benefit, perhaps you could
address them directly, instead of veering off into irrelevant matters.

JMC:
>             Anyway he should tell us what it is. 

With great pleasure.  Proper names express their senses, which function as
their cognitive contents, and denote those objects (if any) that they are
names of; the distinction is that of intension and extension.
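[A toy rendering of the distinction, in my own hypothetical structures: a name's sense is a condition on objects, and its denotation is whatever object in a given model uniquely satisfies that condition. Frege's Hesperus/Phosphorus pair shows two distinct senses converging on one extension.]

```python
# Illustrative model only: the "world" and the senses are made up for
# the example, not drawn from any worked-out semantical theory.

WORLD = {
    "venus": {"evening_star": True,  "morning_star": True},
    "mars":  {"evening_star": False, "morning_star": False},
}

SENSES = {
    # Two distinct intensions (modes of presentation) ...
    "Hesperus":   lambda obj: obj["evening_star"],
    "Phosphorus": lambda obj: obj["morning_star"],
}

def denotation(name, world):
    """Extension: the object (if any) the name's sense picks out."""
    sense = SENSES[name]
    matches = [o for o, props in world.items() if sense(props)]
    return matches[0] if len(matches) == 1 else None
```

Here "Hesperus" and "Phosphorus" have different senses (different functions) but the same denotation in this world, which is precisely why the intension/extension distinction cannot be read off from reference alone.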

JMC:
>                                                     However, I don't
>promise to try, because it may not turn out to be suitable for work at
>this time.  

I would never presume to challenge a busy person like yourself to
substantiate someone else's theories; in this case my request is much
humbler, as well as more relevant to your own pursuits: kindly outline a
foundation for your own theories of the "AI notion of success of
reference", or forever abstain from making claims that a robot can refer.

JMC:
>           Even if I tried, and everyone agreed I had failed, Zeleny
>still would need another argument that Dennett is a charlatan.  (Zeleny
>says "ignoramus or charlatan", but clearly he should exclude the former
>since all the evidence is that Dennett is much smarter than Zeleny.)

Please elaborate: any claim of linear ordering of human faculties holds an
irresistible fascination for me.  Are you perchance basing your judgment on
the demonstrated ability to vend or vaunt one's cognitive capacity?

>--
>John McCarthy, Computer Science Department, Stanford, CA 94305
>*
>He who refuses to do arithmetic is doomed to talk nonsense.

Have your AI programs learned to do arithmetic yet?

'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`
`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'
: Qu'est-ce qui est bien?  Qu'est-ce qui est laid?         Harvard   :
: Qu'est-ce qui est grand, fort, faible...                 doesn't   :
: Connais pas! Connais pas!                                 think    :
:                                                             so     :
: Mikhail Zeleny                                                     :
: 872 Massachusetts Ave., Apt. 707                                   :
: Cambridge, Massachusetts 02139                                     :
: (617) 661-8151                                                     :
: email zeleny@zariski.harvard.edu or zeleny@HUMA1.BITNET            :
:                                                                    :
'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`
`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'