From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!yale.edu!yale!hsdndev!husc-news.harvard.edu!zariski!zeleny Sun Dec  1 13:06:26 EST 1991
Article 1729 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca sci.philosophy.tech:1203 comp.ai.philosophy:1729
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!yale.edu!yale!hsdndev!husc-news.harvard.edu!zariski!zeleny
From: zeleny@zariski.harvard.edu (Mikhail Zeleny)
Newsgroups: sci.philosophy.tech,comp.ai.philosophy
Subject: Re: Zeleny's argument on denotation.
Keywords: denotation, sense, communication
Message-ID: <1991Nov28.153335.5974@husc3.harvard.edu>
Date: 28 Nov 91 20:33:32 GMT
References: <1991Nov27.111048.4933@odin.diku.dk> <1991Nov27.115032.5957@husc3.harvard.edu> <kj8iiqINNd35@cs.utexas.edu>
Organization: Dept. of Math, Harvard Univ.
Lines: 210
Nntp-Posting-Host: zariski.harvard.edu

In article <kj8iiqINNd35@cs.utexas.edu> 
turpin@cs.utexas.edu (Russell Turpin) writes:

>In article <1991Nov27.115032.5957@husc3.harvard.edu> 
>zeleny@zariski.harvard.edu (Mikhail Zeleny) writes:

MZ:
>> Very well, I shall repeat my argument. ...

RT:
>For which I am grateful.  The first two times I read Mr Zeleny's
>argument, I did not understand it enough to comment.  Now, I
>*think* I am beginning to see its outlines, but the more I see, 
>the less convincing it becomes.

We shall see about that.

MZ:
>> Now, if neural pulses are indeed symbols in the above sense, it seems
>> reasonable to ask what the material property is (for it must be material
>> under the assumptions of reductive materialism, assumed by Dennett & Co.)
>> in virtue of which they stand for their referents, in accordance with
>> the traditional characterization of the sign by the formula *aliquid pro
>> aliquo*.  The problem with identifying such a property is twofold.
>>
>> If, on one hand, one identifies the neural pulses as purely denotative
>> signs, ones that refer without expressing, one would be forced to postulate
>> a causal relation in virtue of which these signs denote, stipulating that
>> this causal relation is itself entirely immanent in nervous activity, in
>> direct contradiction to the fact that our language, allegedly founded
>> solely on such nervous activity, has no trouble referring to objects and
>> phenomena that occur outside of the latter.

RT:
>Why must this causal relation be "entirely immanent in [neural]
>activity"?  It seems possible -- indeed, likely -- that a causal
>account of denotation would involve a description of how the
>concerned neural states arise through the person's interaction
>with the world.  As a simple example, consider a child who hears
>others use the word "hot" when he touches a warm mug, and who
>comes to associate the sensation of heat with this word, and who
>then begins to use this word to denote the sensation of heat. 

Whatever the connection is, it must be entirely subsumed by whatever
constitutes the mind (but not necessarily in a way completely transparent
to its awareness), since there are no invisible threads connecting the sign
tokens in the mind (whatever their form) with the external objects denoted
by them (whatever they might be).  The regularity in the child's behavior
may be interpreted by an outsider as a causal connection; however, unless
this regularity is somehow internalized by the mind, it cannot be construed
as such from the child's first-person perspective.
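To fix ideas, here is a minimal finite-state sketch of RT's child, in
Python (the class name and the threshold are hypothetical, chosen purely
for illustration).  Note that the machine's transition function mentions
only its own state and its present input; the external history that
formed the association appears nowhere inside it, which is the
internalization problem restated:

  # A toy finite-state "word learner" (hypothetical names throughout).
  HOT_THRESHOLD = 40.0  # degrees Celsius; arbitrary illustrative cutoff

  class HotWordFSA:
      """Two states: association not yet formed / formed.  Output is a
      function of the current state and current input alone; the causal
      history that produced the association is not represented in it."""

      def __init__(self):
          self.associated = False          # start state: no association

      def observe(self, temperature, heard_word=None):
          feels_hot = temperature > HOT_THRESHOLD
          if feels_hot and heard_word == "hot":
              self.associated = True       # transition: association formed
          if feels_hot and self.associated:
              return "hot"                 # the child now produces the word
          return None                      # otherwise, silence

  child = HotWordFSA()
  child.observe(60.0, heard_word="hot")    # touches warm mug, hears "hot"
  print(child.observe(60.0))               # -> "hot"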

RT:
>There is a causal connection between the word "hot" and what the 
>child denotes by it, but this connection is NOT "entirely 
>immanent" in the child's neural activity.  It involves a historical
>interaction among the child's neural activity, the world, and
>other speakers.  None of this requires the child's cognitive
>abilities to exceed those of an FSA.  (Moreover, the philosophic
>literature is replete with accounts of denotation that involve
>this kind of interaction, especially in the explanation of proper
>names.)

If you mean Kripke's causal theory of name meaning, this is one good reason
why it must be mistaken.  As for your example of temperature, consider a
thermometer capable of speech, but bereft of mental representations of the
sort we are discussing.  When it says `hot', does it refer to the outside
temperature, i.e. to the molecular movement, or to its own readout,
causally influenced thereby?  I contend the latter, and analyze the former
as our interpretation, made in accordance with our mental representations
of temperature, which are in turn related to the abstract concept thereof.

MZ:
>> On the other hand, should one assume that neural pulses are connotative
>> signs, which refer by virtue of expressing an intensional meaning, then
>> such meanings, by the above observation, must be entirely captured in the
>> physical states of the brain.  Now, as I have argued elsewhere on the
>> Putnam thread, it's well known that intensions, once admitted, bring in a
>> transfinite hierarchy thereof; in other words, on the connotative theory,
>> reference depends on the grasp of (and, under the reductive materialist
>> assumption, physical embodiment of) meanings, which depend on meanings of
>> meanings, which in turn depend on meanings of meanings of meanings, and so
>> on.  Note that it does you no good to argue that in practice a brain only
>> uses a finite initial segment of the intensional hierarchy, for the
>> question of the nature of reference will only reappear on the highest
>> admitted level thereof. ...

RT:
>It is not a problem that the question reappears.  For Mr
>Zeleny's opponents, it is enough that people, in fact, do not
>resolve the question through an infinity of levels.

How do you know that?  I believe that, in accordance with the analysis I
present, people and other signifying beings indeed enjoy access to
transfinite abstract meanings, albeit without complete awareness of doing
so.  Note that Gödel, in his 1944 and 1964 articles on Russell and Cantor,
made a similar claim: that we enjoy perception-like access to transfinite
mathematical objects.  Prove us wrong.

MZ:
>> ... On the assumption that the brain is a finite state automaton,
>> this amounts to a reductio ad absurdum of materialist semantics.

RT:
>This is only so if people, in fact, resolve the question through
>the infinite "intensional hierarchy" that Mr Zeleny poses.  If
>they stop after some initial segment to which we can set a constant
>upper bound, then they do no more than what an FSA can do.  If they
>stop after some finite, but unbounded, initial segment, then they
>do no more than what Turing machines can do.  Only if they actually
>resolve the question through the entire infinite hierarchy does it
>seem that they demonstrate some computational capacity that exceeds
>that of Turing machines.

You seem to misunderstand the nature of a reductio ad absurdum proof.  It
suffices to show that comprehension of the intensional hierarchy is
necessary for semantical understanding, and that no finite structure is
capable of such comprehension.  This is exactly what I have done.
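In schematic form (my compression of the argument):

  (1) Semantical understanding requires comprehension of the full
      intensional hierarchy.
  (2) No finite structure can comprehend a transfinite hierarchy.
  (3) Reductive materialism takes the understander to be a finite
      state automaton.

From (1)-(3) it follows that there is no semantical understanding; since
there patently is, the materialist must abandon (3), or else (1) or (2),
neither of which I see how to deny.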

MZ:
>> For an analogous example, consider the integers.  It's well-known that
>> no complete recursive axiomatization of elementary arithmetic can be
>> given; furthermore, the axioms of the first-order PA are not even
>> categorical, i.e. they fail to characterize their models up to
>> isomorphism.  In spite of all that, human mathematicians seem to have
>> no difficulty in operating with semantical notions like that of the
>> standard model of the natural numbers, which inherently can't be
>> captured by an FSA.
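(To recall the standard backing for the non-categoricity claim: by
compactness, adjoin to first-order PA a fresh constant $c$ together with
the axioms $c > 0$, $c > 1$, $c > 2$, \ldots; every finite subset of the
resulting theory holds in the standard model, so the whole theory has a
model, and any such model contains a nonstandard element, hence is not
isomorphic to the standard one.)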

RT:
>The Boyer-Moore theorem prover can "work with" semantic notions
>like that of the standard model.  (I won't say that it does this
>without difficulty, but then I would not claim this of people
>either!)  By "work with", I mean, for example, that the Boyer-Moore
>theorem prover can prove that 2nd-order PA is categorical.  Of
>course, it cannot determine whether or not an arbitrary proposition
>is true for the standard model.  But then, neither can people.

No, it can't work with *any* semantic notions.  According to its published
description, the Boyer-Moore theorem prover is limited to the syntax of
quantifier-free, first-order logic with many-sorted function symbols, and
is capable of induction on the ordinals up to $\epsilon_0$.  I can see how
it could prove the categoricity of second-order PA; however, I cannot see
in what sense this purely formal feat implies any sort of semantical
ability.  Note also that, in virtue of its syntactical limitations, the
Boyer-Moore theorem prover would seem to be incapable of giving a
definition of finitude, a second-order property; nor can it transcend an
inherent limitation of Turing machines by deciding whether an arbitrary
set is finite.
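The last point is the usual reduction: given a program $P$ and input $x$,
let $S_{P,x} = \{n : P$ has not halted on $x$ within $n$ steps$\}$.  Then
$S_{P,x}$ is finite if and only if $P$ halts on $x$, so a decision
procedure for finitude would yield one for the halting problem.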

RT:
>I understand the analogy Mr Zeleny draws.  But it does not help.
>In neither the second branch of his argument, nor this example, do
>I understand what computational capacity Mr Zeleny attributes to 
>people that is impossible for machines.

The computational capacity I am attributing to humans is that of fortuitous
computability, which often goes above and beyond the effective kind.  This
is more or less a claim based on mathematical intuition (which you may call
an article of faith), which I share with Church and Penrose, so feel free
to disregard it as irrational.

RT:
>I am not arguing here that people are FSA's, nor even that their
>cognitive abilities are no greater than what might be implemented
>through FSA's.  Mr Zeleny assumes that materialism requires
>people to be FSA's, but this is far from clear.  An ANN with
>real-valued weights, for example, is more powerful than an FSA.
>(Real numbers require arbitrary storage.)  It *might* be argued
>that physics imposes a finite state on any biochemical machine,
>but this is far from clear, and Mr Zeleny has so far assumed
>this, rather than argued it.  (For an example of the subtle
>interactions between physics and computation, consider that David
>Deutsch has shown how to create a machine that with some
>probability computes an answer faster than is possible with a
>deterministic Turing machine *if* the many-world interpretation
>of quantum mechanics is true.)

Deutsch's example has to do with the speed of computation, rather than with
its feasibility.  As for a materialist theory of mind that views it as
being more computationally powerful than an FSA, I leave that task up to
the Churchlands and their sorry lot.
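For the record, RT's parenthetical about real-valued weights can be made
concrete: a single exact rational state already constitutes unbounded
storage.  A toy sketch in Python (my illustration, using a base-4 bit
encoding; not a model of any actual network):

  from fractions import Fraction

  class RationalStack:
      """An unbounded stack of bits stored in one exact rational 'weight'.
      push maps s -> (s + 2b + 1)/4, so the top bit is always recoverable
      exactly; no machine with finitely many states can mimic this."""

      def __init__(self):
          self.s = Fraction(0)            # empty stack

      def push(self, bit):
          self.s = (self.s + 2 * bit + 1) / 4

      def pop(self):
          top = self.s * 4
          bit = 1 if top >= 2 else 0      # top digit of the base-4 expansion
          self.s = top - 2 * bit - 1
          return bit

  stack = RationalStack()
  for b in (1, 0, 1, 1):
      stack.push(b)
  print([stack.pop() for _ in range(4)])  # -> [1, 1, 0, 1], last in first out

Whether physics permits such unboundedly precise states is, of course,
exactly the question RT flags as open.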

RT:
>I think it would be very interesting if someone could show that
>human cognitive capacity exceeds what is possible with FSA's, 
>or even what is possible with Turing machines.  But Mr Zeleny
>has not yet convinced me.  Perhaps it is my misunderstanding
>that sees gaps in his argument where there are in fact solid
>deductions.  If so, I welcome his further explanation.

See above.

>Russell


'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`
`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'
: What is good?  What is ugly?                             Harvard   :
: What is great, strong, weak...                           doesn't   :
: Don't know!  Don't know!                                  think    :
:                                                             so     :
: Mikhail Zeleny                                                     :
: 872 Massachusetts Ave., Apt. 707                                   :
: Cambridge, Massachusetts 02139                                     :
: (617) 661-8151                                                     :
: email zeleny@zariski.harvard.edu or zeleny@HUMA1.BITNET            :
:                                                                    :
'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`
`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'


