From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!batcomputer!caen!sdd.hp.com!elroy.jpl.nasa.gov!ames!agate!boulder!ucsu!spot.Colorado.EDU!boroson Wed Feb  5 11:55:46 EST 1992
Article 3353 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca comp.ai.philosophy:3353 sci.philosophy.tech:2006
Newsgroups: comp.ai.philosophy,sci.philosophy.tech
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!batcomputer!caen!sdd.hp.com!elroy.jpl.nasa.gov!ames!agate!boulder!ucsu!spot.Colorado.EDU!boroson
From: boroson@spot.Colorado.EDU (BOROSON BRAM S)
Subject: Re: Is understanding algorithmic?
Message-ID: <1992Feb1.015307.18388@ucsu.Colorado.EDU>
Sender: news@ucsu.Colorado.EDU (USENET News System)
Nntp-Posting-Host: spot.colorado.edu
Organization: University of Colorado, Boulder
References: <1992Jan26.014607.8073@husc3.harvard.edu> <6523@pkmab.se> <1992Jan28.122457.8161@husc3.harvard.edu>
Date: Sat, 1 Feb 1992 01:53:07 GMT
Lines: 117

In article <1992Jan28.122457.8161@husc3.harvard.edu> zeleny@zariski.harvard.edu (Mikhail Zeleny) writes:
>In article <6523@pkmab.se> 
>ske@pkmab.se (Kristoffer Eriksson) writes:
>
>>In article <1992Jan26.014607.8073@husc3.harvard.edu> 
>>zeleny@zariski.harvard.edu (Mikhail Zeleny) writes:
>
>True.  Having given my argument on several occasions in the past, I felt
>disinclined to repeat it, since its technical nature is liable to lead to
>misinterpretation by readers not versed in analytic philosophy.  Well, here
>goes. 
>
>It is commonly assumed that computers are capable of symbol manipulation;
>an analogous claim is sometimes made on behalf of human brains, neural
>pulses being interpreted as the symbols in question.  However, in
>considering such claims, we must be careful about what we mean by `symbol'.
>In philosophical use, this term is interpreted as a synonym of `sign' (cf.
>the use by Whitehead), sometimes used as standing for a conventional,
>substitutive sign (e.g.  by Peirce and Morris), or, alternatively, as an
>iconic, analogical sign (e.g. by Kant and Hegel).
>
>Now, if neural pulses or the internal states of an FSA are indeed symbols
>in the above sense, it seems reasonable to pose a question of what is the
>material (for it must be such under the assumptions of reductive
>materialism, assumed by Dennett & Co.)  property in virtue of which they
>stand for their referents, in accordance with the traditional characterization
>of the sign by the formula *aliquid stat pro aliquo*.  The problem with
>identifying such a property is twofold.
>
>If, on one hand, one identifies the neural pulses as purely denotative
>signs, ones that refer without expressing, one would be forced to postulate
>a causal relation in virtue of which these signs denote, stipulating that
>this causal relation is itself entirely immanent in nervous activity, in
>direct contradiction to the fact that our language, allegedly founded
>solely on such nervous activity, has no trouble referring to objects and
>phenomena that occur outside of the latter.  For, on one hand, if an entity
>can be said to refer, the mechanism of such reference must be taken as
>being wholly within the provenance of the entity in question, to the extent
>that we are justified in ascribing the reference to the said entity, rather
>than to the extrinsic factors of its relation to its environment; on the
>other hand, once we reject solipsism, we are forced to infer an external
>reality of potential denotata, unconnected to our putative subject in any
>manner that can be wholly subsumed by it.
>
There are no ``causal relations'' in anything, as Hume has shown.  Reference
consists entirely of a correlation between information in neural pulses (etc.)
and information in the outside world.  Of course we can refer to objects
that do not exist (centaurs, sets, etc.), but this is always by combining
and reshuffling our ideas about objects that do exist.

In this interpretation, reference *does* have to do with external entities,
since this correlation of information would not exist without an external
entity.  Reference is not a wholly internal affair.
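
To make the correlational picture concrete, here is a toy sketch of my own
(nothing in it comes from the thread): an "internal state" is just a noisy
transduction of an external variable, and the degree of covariation between
the two is all that "aboutness" amounts to on this view.

```python
# Toy model: reference as correlation between an internal signal and
# an external variable.  All names and numbers here are illustrative.

import random

random.seed(0)

# External world: some scalar quantity sampled over 1000 moments.
world = [random.uniform(0.0, 1.0) for _ in range(1000)]

# Internal state: an imperfect (noisy) transduction of that quantity.
internal = [w + random.gauss(0.0, 0.1) for w in world]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# On the correlational account, a coefficient near 1 is what makes the
# internal state count as carrying information about the external one.
print(pearson(world, internal))
```

The point of the sketch is only that no causal metaphysics is invoked:
the correlation is a bare statistical fact relating the two sequences.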

>On the other hand, should one assume that neural pulses are connotative
>signs, which refer by virtue of expressing an intensional meaning, then
>such meanings, by the above observation, must be entirely captured in the
>physical states of the brain.  Now, as I have argued elsewhere on the
>Putnam thread, it's well known that intensions, once admitted, bring in a
>transfinite hierarchy thereof; in other words, on the connotative theory,
>reference depends on the grasp of (and, under the reductive materialist
>assumption, physical embodiment of) meanings, which depend on meanings of
>meanings, which in turn depend on meanings of meanings of meanings, and so
>on.  For at each intensional level it is reasonable to interpret the
>concept as yet another sign, asking what is the factor in virtue of which
>it succeeds in referring to an object; in other words, it does us no good
>to argue that in practice a brain or a computer only uses a finite initial
>segment of the intensional hierarchy, for the question of the nature of
>reference will only reappear on the highest admitted level thereof.  On the
>assumption that the brain, like a computer, is a finite state automaton,
>this amounts to a reductio ad absurdum of materialist semantics.  Moreover,
>as is well-known, classical model-theoretic semantics is incapable of fully
>characterizing reference, and ipso facto it is incapable of sufficiently
>constraining any derived operational criteria that purport to implement the
>AI notion of success of reference.  Thus, if I am right, AI projects of
>creating a machine capable of signifying independently of its creator,
>surely a prerequisite for machine intelligence, are doomed to failure.
>

If it is not too much trouble, could you or someone else repost what you
said on the Putnam thread?  I find it hard to imagine how reference could
depend on meaning on *any* level.

When I refer to something (say an apple) within my own brain, I am aware only
of a web of sensory memories and abstractions derived from my experiences
with apples.  Where does meaning come in?

>Now, in Mark's example, the machines are programmed in a deterministic
>manner; I take it that this programming is performed by a rational agent
>(N.B. for David Gudeman: this factor, pace Kant and others, serves to
>assign all moral responsibility).  This agent stipulates the operational
>characteristics of the machines to the extent that he is capable of
>controlling their functioning within their environment.  As I stated above,
>the machines merely succeed in matching an internal representation of their
>location, as well as of some objects likely to be found there, with a
>preprogrammed description, perhaps through the use of a visual pattern
>matching algorithm.  The reference, if any, belongs to the programmer.

I don't know.  Matching an internal representation with sensory data
sounds like reference to me.
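
As a minimal sketch of the kind of matching Zeleny describes (my own
example, with made-up templates and feature vectors, not anything from
Mark's original scenario): the machine holds preprogrammed internal
representations and picks whichever one best fits the sensory data.

```python
# Illustrative template matching: internal representations of locations
# are compared against sensory input; the closest one "wins".

# Preprogrammed internal representations (hypothetical feature vectors,
# e.g. brightness, echo, warmth).
templates = {
    "kitchen":  [0.9, 0.1, 0.8],
    "hallway":  [0.2, 0.9, 0.3],
    "workshop": [0.5, 0.5, 0.9],
}

def match(sensed, templates):
    """Return the template name with the smallest squared distance."""
    def dist(name):
        return sum((s - v) ** 2 for s, v in zip(sensed, templates[name]))
    return min(templates, key=dist)

# Sensory data lying close to the "kitchen" template:
print(match([0.85, 0.15, 0.75], templates))   # prints "kitchen"
```

Whether this counts as the machine referring, or merely as the
programmer's reference carried out by proxy, is exactly the question
at issue.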

I know positivism is out of fashion in some circles, but Zeleny seems
to think that showing someone believes it is akin to refuting them.  I would
guess the fraction of physical scientists who are positivists is as high 
as the fraction of mathematicians who are Platonists.  Glashow, in a debate
with Sandra Harding, said he was a very firmly committed positivist, for 
example.  And read Michael Friedman's _Foundations of Space-Time Theories_:
while relativity theory does not provide direct support for logical positivism,
there is a definite but subtle historical link between the two.


------------------
BRAM
Recursive
Acronym
Man
------------------