From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!bonnie.concordia.ca!uunet!think.com!spdcc!das-news.harvard.edu!husc-news.harvard.edu!zariski!zeleny Tue Feb 11 15:25:55 EST 1992
Article 3601 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca comp.ai.philosophy:3601 sci.philosophy.tech:2099
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!bonnie.concordia.ca!uunet!think.com!spdcc!das-news.harvard.edu!husc-news.harvard.edu!zariski!zeleny
From: zeleny@zariski.harvard.edu (Mikhail Zeleny)
Newsgroups: comp.ai.philosophy,sci.philosophy.tech
Subject: Re: Is understanding algorithmic?
Message-ID: <1992Feb9.053036.8640@husc3.harvard.edu>
Date: 9 Feb 92 10:30:32 GMT
References: <6523@pkmab.se> <1992Jan28.122457.8161@husc3.harvard.edu> <6537@pkmab.se>
Organization: Dept. of Math, Harvard Univ.
Lines: 117
Nntp-Posting-Host: zariski.harvard.edu

Sorry I'm running late with substantive replies: the chickens have all come
home to roost, and a lot of my present writing has to be done for academic
credit.

In article <6537@pkmab.se> 
ske@pkmab.se (Kristoffer Eriksson) writes:

>In article <1992Jan28.122457.8161@husc3.harvard.edu> 
>zeleny@zariski.harvard.edu (Mikhail Zeleny) writes:

MZ:
>>>This is merely an assertion, not the application of a proof that Mark
>>>Corscadden asked for.
>>
>> Well, here goes.

KE:
>Thank you. I have not been following this thread earlier.

MZ:
>> *aliquid stat pro aliquo*.

KE:
>I don't know Latin. English took the place of Latin long before I started
>school.

Well, my Latin is rather minimal, but there are some clichés one
traditionally states in the original language... try doing as I do, and
look them up.

MZ:
>>on one hand ... purely denotative signs ... one would be forced to postulate
>>a causal relation in virtue of which these signs denote, stipulating that
>>this causal relation is itself entirely immanent in nervous activity, in
>>direct contradiction to the fact that our language, allegedly founded
>>solely on such nervous activity, has no trouble referring to objects and
>>phenomena that occur outside of the latter.
>
>>On the other hand ... connotative signs, ... Now, as I have argued elsewhere
>>on the Putnam thread, it's well known that intensions, once admitted, bring
>>in a transfinite hierarchy thereof; in other words, on the connotative theory,
>>reference depends on the grasp of (and, under the reductive materialist
>>assumption, physical embodiment of) meanings, which depend on meanings of
>>meanings, which in turn depend on meanings of meanings of meanings, and so
>>on.

KE:
>You consider two ways of viewing signs, and for each one you conclude that
>it doesn't work out. It appears to me that you have not ruled out one simple
>solution that immediately suggested itself to me: use them both! They seem
>to complement each other, such that each of them takes care of the other
>one's problems, at least the ones you presented.

Why would this change anything?  Your suggestion is somewhat similar in
spirit to the Russellian method of eliminating descriptions and "apparent
names" interpreted as abbreviations for clusters thereof (e.g. `Sir Walter
Scott' --> the author of "Waverley" + an acquaintance of George IV + ...),
but treating "logically proper names" as denoting directly, without the
mediation of a descriptive content (strictly speaking, his is not a
connotative theory, as the apparent connotation is explained away as mere
ellipsis).  Still, there remains the problem of just how the logically
proper names are supposed to denote.

KE:
>Suppose the meaning of connotative signs eventually terminates in denotative
>signs, which in turn have a causal relation with attributes of the external
>world through our senses. Through our senses, we can have denotative signs
>referring to simple attributes of the external world, like colors, shapes,
>sounds, touch and so on, which are well suited to build connotative signs
>on (thereby solving the problems of connotative signs), that can, in essence,
>use these denotative signs to describe any imaginable kind of referent whether
>ever directly experienced or not (thereby solving the problems of denotative
>signs).

Well, this is, in effect, what Russell said: logically proper names can
only denote such entities as we are directly acquainted with, i.e.
sense-data, universals, and our own selves.  Trouble is, even if you accept
this view, how would my ability to name the yellow spot in my field of
vision explain my apparent ability to refer to an external object causing
that spot?  Keep in mind the unlimited variety of phenomenal impressions a
given object may produce...

MZ:
>>... On the assumption that the brain, like a computer, is a finite state
>>automaton, this amounts to a reductio ad absurdum of materialist semantics.

KE:
>I assume this is meant to say that a finite state automaton can not possibly
>refer, and that the brain is not a finite state automaton, and that this is
>the conclusion that proves that the machines in Mark's example do not refer?

Indeed that is my claim.
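For readers who have not met the term, the kind of finite state automaton
at issue can be sketched in a few lines: finitely many states, a fixed
transition table, and nothing else. This is a toy illustration of the
mathematical object (the state names and the even-parity language are my
own choices for the example), not a model anyone here proposes for the
brain:

```python
# A deterministic finite state automaton (DFA): the machine's entire
# "mental life" is which of finitely many states it currently occupies,
# plus a fixed lookup table saying where each input symbol sends it.

def run_dfa(start, accepting, delta, inputs):
    """Run the automaton on a sequence of input symbols and report
    whether it halts in an accepting state."""
    state = start
    for symbol in inputs:
        state = delta[(state, symbol)]
    return state in accepting

# Example: accept strings over {'a', 'b'} with an even number of 'a's.
delta = {
    ('even', 'a'): 'odd',  ('even', 'b'): 'even',
    ('odd',  'a'): 'even', ('odd',  'b'): 'odd',
}

print(run_dfa('even', {'even'}, delta, "abba"))  # True
print(run_dfa('even', {'even'}, delta, "ab"))    # False
```

The point of dispute above is whether any device exhaustively described by
such a table, however large, could thereby *refer* to anything outside
itself.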

Please don't misconstrue my position as being anti-AI.  In fact, I feel
that the AI thesis, at least as formulated by David Chalmers, is a
perfectly good working hypothesis in the philosophy of mind.  However, I
believe that there are good theoretical reasons for mistrusting the AI
engineering claims and goals; furthermore, I find it unlikely that the
presently fashionable, primitive computational model (FSA's and Turing
machines) is sufficient for adequate models of intelligence.

>-- 
>Kristoffer Eriksson, Peridot Konsult AB, Hagagatan 6, S-703 40 Oerebro, Sweden
>Phone: +46 19-13 03 60  !  e-mail: ske@pkmab.se
>Fax:   +46 19-11 51 03  !  or ...!{uunet,mcsun}!mail.swip.net!kullmar!pkmab!ske

`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'
: Qu'est-ce qui est bien?  Qu'est-ce qui est laid?         Harvard   :
: Qu'est-ce qui est grand, fort, faible...                 doesn't   :
: Connais pas! Connais pas!                                 think    :
:                                                             so     :
: Mikhail Zeleny                                                     :
: 872 Massachusetts Ave., Apt. 707                                   :
: Cambridge, Massachusetts 02139           (617) 661-8151            :
: email zeleny@zariski.harvard.edu or zeleny@HUMA1.BITNET            :
:                                                                    :
'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`


