From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!wupost!ukma!hsdndev!husc-news.harvard.edu!zariski!zeleny Tue Nov 19 11:09:48 EST 1991
Article 1285 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca sci.philosophy.tech:938 comp.ai.philosophy:1285
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!wupost!ukma!hsdndev!husc-news.harvard.edu!zariski!zeleny
From: zeleny@zariski.harvard.edu (Mikhail Zeleny)
Newsgroups: sci.philosophy.tech,comp.ai.philosophy
Subject: Re: Is there any such thing as informal logic?
Message-ID: <1991Nov12.113003.5368@husc3.harvard.edu>
Date: 12 Nov 91 16:30:02 GMT
References: <1991Oct22.041210.5931@watserv1.waterloo.edu> <JMC.91Nov7214345@SAIL.Stanford.EDU>
Organization: Dada
Lines: 52
Nntp-Posting-Host: zariski.harvard.edu

In article <JMC.91Nov7214345@SAIL.Stanford.EDU> 
jmc@SAIL.Stanford.EDU (John McCarthy) writes:

JMC:
>Much of Zeleny's post is obscure to me, but this much I understand
>and disagree with

MZ:
>     ... the success of our reference to any entity, whether
>     intensional or extensional, depends on our grasp of its
>     concept, which in turn depends on our grasp on the concept
>     of its concept, and so on.

JMC:
>However, my disagreement may depend on an AI notion of success of
>reference.  I would consider a robot to refer successfully to
>chairs if it gets them when asked, and decides chairness of objects
>in agreement with humans in those cases when the humans agree
>with each other.  This doesn't require concepts of concepts,
>although some other uses do.

I hope that you would agree with me that the operational success of any
implementation of your favorite theory of reference will depend on its
theoretical adequacy.  It's well known that classical model-theoretic
semantics is incapable of fully characterizing reference; hence it is
incapable of sufficiently constraining any derived operational criteria
that purport to implement what you call the ``AI notion of success of
reference''.  (See e.g. an overview in Lakoff's ``Women, Fire, and
Dangerous Things'', chapter 15.)  Now, the alternative to model-theoretic
semantics that I am advocating above (the Frege-Church semantics) doesn't
seem to lend itself to an implementation, or even a representation, in
finite-state automata.  Please note that the burden of providing a finitely
representable semantical theory capable of fixing the operational criteria
of reference lies with AI researchers like you.

N.B.  The part of my article you found obscure formulated a parallel
argument for the semantics of arithmetic.  I'll elaborate on this anon.

'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`
`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'
: What is good?  What is ugly?                              Harvard  :
: What is great, strong, weak...                            doesn't  :
: No idea! No idea!                                          think   :
:                                                             so     :
: Mikhail Zeleny                                                     :
: 872 Massachusetts Ave., Apt. 707                                   :
: Cambridge, Massachusetts 02139                                     :
: (617) 661-8151                                                     :
: email zeleny@zariski.harvard.edu or zeleny@HUMA1.BITNET            :
:                                                                    :
'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`
`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'