From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!yale.edu!qt.cs.utexas.edu!cs.utexas.edu!news Sun Dec  1 13:06:30 EST 1991
Article 1736 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca sci.philosophy.tech:1209 comp.ai.philosophy:1736
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!yale.edu!qt.cs.utexas.edu!cs.utexas.edu!news
From: turpin@cs.utexas.edu (Russell Turpin)
Newsgroups: sci.philosophy.tech,comp.ai.philosophy
Subject: Re: Zeleny's argument on denotation, 1st branch.
Followup-To: sci.philosophy.tech
Date: 28 Nov 91 22:56:30 GMT
Organization: U Texas Dept of Computer Sciences, Austin TX
Lines: 65
Message-ID: <kjat0uINNcbo@cs.utexas.edu>
References: <1991Nov27.115032.5957@husc3.harvard.edu>
Summary: Refer to the second branch.
Keywords: denotation, sense, communication

-----
I wrote:
>> Why must this causal relation be "entirely immanent in [neural]
>> activity"?  It seems possible -- indeed, likely -- that a causal
>> account of denotation would involve a description of how the
>> concerned neural states arise through the person's interaction
>> with the world.  As a simple example, consider a child who hears
>> others use the word "hot" when he touches a warm mug, and who
>> comes to associate the sensation of heat with this word, and who
>> then begins to use this word to denote the sensation of heat. 

In article <1991Nov28.153335.5974@husc3.harvard.edu> zeleny@zariski.harvard.edu (Mikhail Zeleny) writes:
> Whatever the connection is, it must be entirely subsumed by whatever
> constitutes the mind (but not necessarily in a way completely transparent
> to its awareness), since there are no invisible threads connecting the sign
> tokens in the mind (whatever their form) with the external objects denoted
> by them (whatever they might be).  The regularity in the child's behavior
> may be interpreted by an outsider as a causal connection; however, unless
> this regularity is somehow internalized by the mind, it cannot be construed
> as such from the child's first-person perspective.

I would suggest that "this regularity is ... internalized by the
mind" in the child's understanding that others also use the word
"hot" for this purpose, and so the sign is useful for
communication.  I would add two comments.  First, it is unclear
when children begin to explicitly understand what it is they are
doing in using various words.  The meaning of communication is
something that must be learned.  That children gradually learn
how to use language suggests to me that the path from reflex, to
innate and automatic communication, to a more reflective and
"semantic" understanding of communication is continuous.  Second,
the reflective understanding of communication takes us into the
second branch of Mr Zeleny's argument, regarding the intensional
aspect of sign use.  I continue *that* discussion in another
post. 

> If you mean Kripke's causal theory of name meaning, this is one good reason
> why it must be mistaken.  As for your example of temperature, consider a
> thermometer capable of speech, but bereft of mental representations of the
> sort we are discussing.  When it says `hot', does it refer to the outside
> temperature, i.e. to the molecular movement, or to its own readout,
> causally influenced thereby?  I contend the latter, and analyze the former
> as our interpretation, made in accordance with our mental representations
> of temperature, which are in turn related to the abstract concept thereof.

Again, I would point to a continuous spectrum where Mr Zeleny
seems to see a discrete difference.  An infant's first use of
"mama" may mean little more than a display of the infant's feelings
at the time: "I am hungry! I am wet! I am tired!"  This is 
undoubtedly much more than an automated thermometer's read-out
of *its* internal state, but it is far from clear that the infant
has the semantic capabilities that Mr Zeleny requires for the word
to be a "true" sign.  

Indeed, for Mr Zeleny's argument to work, there must be some point
during human development when one suddenly acquires access to the
"transfinite hierarchy" of meaning.  It is difficult, after all,
to understand how access to an infinite structure is *gradually*
acquired.  I presume that prior to this point, Mr Zeleny does
not argue that a human has computational capabilities beyond those
of Turing machines.  I am curious as to where Mr Zeleny identifies
this point.  (Implications regarding abortion should be limited,
or directed to the appropriate newsgroup.)

Russell
