From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!ames!agate!dog.ee.lbl.gov!network.ucsd.edu!ogicse!das.harvard.edu!husc-news.harvard.edu!zariski!zeleny Wed Feb  5 11:56:11 EST 1992
Article 3394 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca comp.ai.philosophy:3394 sci.philosophy.tech:2016
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!ames!agate!dog.ee.lbl.gov!network.ucsd.edu!ogicse!das.harvard.edu!husc-news.harvard.edu!zariski!zeleny
From: zeleny@zariski.harvard.edu (Mikhail Zeleny)
Newsgroups: comp.ai.philosophy,sci.philosophy.tech
Subject: Re: Is understanding algorithmic?
Message-ID: <1992Feb2.020220.8338@husc3.harvard.edu>
Date: 2 Feb 92 07:02:18 GMT
Article-I.D.: husc3.1992Feb2.020220.8338
References: <6523@pkmab.se> <1992Jan28.122457.8161@husc3.harvard.edu> <1992Feb1.015307.18388@ucsu.Colorado.EDU>
Organization: Dept. of Math, Harvard Univ.
Lines: 174
Nntp-Posting-Host: zariski.harvard.edu

In article <1992Feb1.015307.18388@ucsu.Colorado.EDU> 
boroson@spot.Colorado.EDU (BOROSON BRAM S) writes:

>In article <1992Jan28.122457.8161@husc3.harvard.edu> 
>zeleny@zariski.harvard.edu (Mikhail Zeleny) writes:

MZ:
>>True.  Having given my argument on several occasions in the past, I felt
>>disinclined to repeat it, since its technical nature is liable to lead to
>>misinterpretation by readers not versed in analytic philosophy.  Well, here
>>goes. 
>>
>>It is commonly assumed that computers are capable of symbol manipulation;
>>an analogous claim is sometimes made on behalf of human brains, neural
>>pulses being interpreted as the symbols in question.  However, in
>>considering such claims, we must be careful about what we mean by `symbol'.
>>In philosophical use, this term is interpreted as a synonym of `sign' (cf.
>>the use by Whitehead), sometimes used as standing for a conventional,
>>substitutive sign (e.g.  by Peirce and Morris), or, alternatively, as an
>>iconic, analogical sign (e.g. by Kant and Hegel).
>>
>>Now, if neural pulses or the internal states of an FSA are indeed symbols
>>in the above sense, it seems reasonable to pose a question of what is the
>>material (for it must be such under the assumptions of reductive
>>materialism, assumed by Dennett & Co.)  property in virtue of which they
>>stand for their referents, in accordance with the traditional characterization
>>of the sign by the formula *aliquid stat pro aliquo*.  The problem with
>>identifying such a property is twofold.
>>
>>If, on one hand, one identifies the neural pulses as purely denotative
>>signs, ones that refer without expressing, one would be forced to postulate
>>a causal relation in virtue of which these signs denote, stipulating that
>>this causal relation is itself entirely immanent in nervous activity, in
>>direct contradiction to the fact that our language, allegedly founded
>>solely on such nervous activity, has no trouble referring to objects and
>>phenomena that occur outside of the latter.  For, on one hand, if an entity
>>can be said to refer, the mechanism of such reference must be taken as
>>being wholly within the provenance of the entity in question, to the extent
>>that we are justified in ascribing the reference to the said entity, rather
>>than to the extrinsic factors of its relation to its environment; on the
>>other hand, once we reject solipsism, we are forced to infer an external
>>reality of potential denotata, unconnected to our putative subject in any
>>manner that can be wholly subsumed by it.

BB:
>There are no ``causal relations'' in anything, as Hume has shown.  Reference
>consists entirely of a correlation between information in neural pulses (etc.)
>and information in the outside world.  Of course we can refer to objects
>that do not exist (Centaurs, sets, etc.) but this is always by a combination
>and reshuffling of our ideas about objects that do exist.

Hume hasn't "shown" anything to anyone who doesn't accept the Lockean
empiricist premiss that all knowledge must come from experience or
introspection; even then, one could reject his argument because of
a consideration advanced by Thomas Reid: that causal relations are known to
us directly from the experience of volition.  Furthermore, Frege has shown
that empiricist philosophy of mathematics is a flop.

BB:
>In this interpretation, reference *does* have to do with external entities,
>since this correlation of information would not exist without an external
>entity.  Reference is not a wholly internal affair.

I never claimed that it was.  Explain the correlation on empiricist grounds.

MZ:
>>On the other hand, should one assume that neural pulses are connotative
>>signs, which refer by virtue of expressing an intensional meaning, then
>>such meanings, by the above observation, must be entirely captured in the
>>physical states of the brain.  Now, as I have argued elsewhere on the
>>Putnam thread, it's well known that intensions, once admitted, bring in a
>>transfinite hierarchy thereof; in other words, on the connotative theory,
>>reference depends on the grasp of (and, under the reductive materialist
>>assumption, physical embodiment of) meanings, which depend on meanings of
>>meanings, which in turn depend on meanings of meanings of meanings, and so
>>on.  For at each intensional level it is reasonable to interpret the
>>concept as yet another sign, asking what is the factor in virtue of which
>>it succeeds in referring to an object; in other words, it does us no good
>>to argue that in practice a brain or a computer only uses a finite initial
>>segment of the intensional hierarchy, for the question of the nature of
>>reference will only reappear on the highest admitted level thereof.  On the
>>assumption that the brain, like a computer, is a finite state automaton,
>>this amounts to a reductio ad absurdum of materialist semantics.  Moreover,
>>as is well-known, classical model-theoretic semantics is incapable of fully
>>characterizing reference, and ipso facto it is incapable of sufficiently
>>constraining any derived operational criteria that purport to implement the
>>AI notion of success of reference.  Thus, if I am right, AI projects of
>>creating a machine capable of signifying independently of its creator,
>>surely a prerequisite for machine intelligence, are doomed to failure.

BB:
>If it is not too much trouble, could you or someone else repost what you
>said on the Putnam thread?  I find it hard to imagine how reference could
>depend on meaning on *any* level.

I will, as soon as I finish my paper.

BB:
>When I refer to something (say an apple) within my own brain, I am aware only
>of a web of sensory memories and abstractions derived from my experiences
>with apples.  Where does meaning come in?

Like I said, this view doesn't work with numbers; see Frege's "Foundations
of Arithmetic".  Moreover, your awareness of a term's meaning is surely not
a necessary criterion thereof; otherwise everyone would ipso facto always
know what he was talking about.

MZ:
>>Now, in Mark's example, the machines are programmed in a deterministic
>>manner; I take it that this programming is performed by a rational agent
>>(N.B. for David Gudeman: this factor, pace Kant and others, serves to
>>assign all moral responsibility).  This agent stipulates the operational
>>characteristics of the machines to the extent that he is capable of
>>controlling their functioning within their environment.  As I stated above,
>>the machines merely succeed in matching an internal representation of their
>>location, as well as of some objects likely to be found there, with a
>>preprogrammed description, perhaps through the use of a visual pattern
>>matching algorithm.  The reference, if any, belongs to the programmer.

BB:
>I don't know.  Matching an internal representation with sensory data
>sounds like reference to me.

What is representation?

BB:
>I know positivism is out of fashion in some circles, but Zeleny seems
>to think showing someone believes it is akin to refuting them. 

Not really; however, in light of the history of XXth century philosophy, it
suffices to make them look quite ridiculous.

BB:
>							         I would
>guess the fraction of physical scientists who are positivists is as high 
>as the fraction of mathematicians who are Platonists.

I think it is closer to the fraction of mathematicians who are Intuitionists.

BB:
>						       Glashow, in a debate
>with Sandra Harding, said he was a very firmly committed positivist, for 
>example.

I rather doubt that he understands the implications of the term.

BB:
>         And read Michael Friedman's _Foundations of Space-Time Theories_:
>while relativity theory does not provide direct support for logical positivism,
>there is a definite but subtle historical link between the two.

Of course, Einstein was as thoroughgoing a realist as science has ever
produced.


>------------------
>BRAM
>Recursive
>Acronym
>Man
>------------------


`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'
: Qu'est-ce qui est bien?  Qu'est-ce qui est laid?         Harvard   :
: Qu'est-ce qui est grand, fort, faible...                 doesn't   :
: Connais pas! Connais pas!                                 think    :
:                                                             so     :
: Mikhail Zeleny                                                     :
: 872 Massachusetts Ave., Apt. 707                                   :
: Cambridge, Massachusetts 02139           (617) 661-8151            :
: email zeleny@zariski.harvard.edu or zeleny@HUMA1.BITNET            :
:                                                                    :
'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`