From: turpin@cs.utexas.edu (Russell Turpin)
Newsgroups: comp.ai.philosophy,sci.philosophy.tech
Subject: Zeleny's argument on denotation.
Followup-To: sci.philosophy.tech,comp.ai.philosophy
Date: 28 Nov 91 01:46:02 GMT
Organization: U Texas Dept of Computer Sciences, Austin TX
Lines: 120
Message-ID: <kj8iiqINNd35@cs.utexas.edu>
References: <43772@mimsy.umd.edu> <1991Nov27.111048.4933@odin.diku.dk> <1991Nov27.115032.5957@husc3.harvard.edu>
Summary: I don't see it.
Keywords: denotation, sense, communication

-----
In article <1991Nov27.115032.5957@husc3.harvard.edu> zeleny@zariski.harvard.edu (Mikhail Zeleny) writes:
> Very well, I shall repeat my argument. ...

For which I am grateful.  The first two times I read Mr Zeleny's
argument, I did not understand it well enough to comment.  Now, I
*think* I am beginning to see its outlines, but the more I see, 
the less convincing it becomes.

> Now, if neural pulses are indeed symbols in the above sense, it seems
> reasonable to pose a question of what is the material (for it must be such
> under the assumptions of reductive materialism, assumed by Dennett & Co.)
> property in virtue of which they stand [for] their referents, in accordance with
> the traditional characterization of the sign by the formula *aliquid pro
> aliquo*.  The problem with identifying such a property is twofold.
>
> If, on one hand, one identifies the neural pulses as purely denotative
> signs, ones that refer without expressing, one would be forced to postulate
> a causal relation in virtue of which these signs denote, stipulating that
> this causal relation is itself entirely immanent in nervous activity, in
> direct contradiction to the fact that our language, allegedly founded
> solely on such nervous activity, has no trouble referring to objects and
> phenomena that occur outside of the latter.

Why must this causal relation be "entirely immanent in [neural]
activity"?  It seems possible -- indeed, likely -- that a causal
account of denotation would involve a description of how the
concerned neural states arise through the person's interaction
with the world.  As a simple example, consider a child who hears
others use the word "hot" when he touches a warm mug, and who
comes to associate the sensation of heat with this word, and who
then begins to use this word to denote the sensation of heat. 

There is a causal connection between the word "hot" and what the 
child denotes by it, but this connection is NOT "entirely 
immanent" in the child's neural activity.  It involves a historic
interaction involving the child's neural activity, the world, and
other speakers.  None of this requires the child's cognitive
abilities to exceed that of an FSA.  (Moreover, the philosophic
literature is replete with accounts of denotation that involve
this kind of interaction, especially in the explanation of proper
names.)
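
To make the FSA point concrete, here is a toy sketch in Python.
The state and stimulus names are hypothetical labels of my own
invention; nothing here is meant as a serious model of learning,
only as a demonstration that the story above needs no unbounded
memory:

    # Finite-state sketch: a "child" that comes to emit "hot"
    # after co-experiencing the heat sensation with the word.
    class HotLearner:
        def __init__(self):
            self.state = "naive"      # finitely many states: two

        def observe(self, sensation, word):
            # One co-occurrence suffices in this toy model.
            if sensation == "heat" and word == "hot":
                self.state = "associated"

        def name(self, sensation):
            # "Denotation" is a fixed table lookup -- FSA power.
            if self.state == "associated" and sensation == "heat":
                return "hot"
            return None

    child = HotLearner()
    child.observe("heat", "hot")  # interaction: world + other speakers
    assert child.name("heat") == "hot"

The causal history that fixes the denotation lives in the calls to
observe(), not in any unbounded storage inside the machine.
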

> On the other hand, should one assume that neural pulses are connotative
> signs, which refer by virtue of expressing an intensional meaning, then
> such meanings, by the above observation, must be entirely captured in the
> physical states of the brain.  Now, as I have argued elsewhere on the
> Putnam thread, it's well known that intensions, once admitted, bring in a
> transfinite hierarchy thereof; in other words, on the connotative theory,
> reference depends on the grasp of (and, under the reductive materialist
> assumption, physical embodiment of) meanings, which depend on meanings of
> meanings, which in turn depend on meanings of meanings of meanings, and so
> on.  Note that it does you no good to argue that in practice a brain only
> uses a finite initial segment of the intensional hierarchy, for the
> question of the nature of reference will only reappear on the highest
> admitted level thereof. ...

It is not a problem that the question reappears.  For Mr
Zeleny's opponents, it is enough that people, in fact, do not
resolve the question through an infinity of levels.

> ... On the assumption that the brain is a finite state automaton,
> this amounts to a reductio ad absurdum of materialist semantics.

This is only so if people, in fact, resolve the question through
the infinite "intensional hierarchy" that Mr Zeleny poses.  If
they stop after some initial segment whose length has a constant
upper bound, then they do no more than an FSA can do.  If they
stop after some finite, but unbounded, initial segment, then they
do no more than a Turing machine can do.  Only if they actually
resolve the question through the entire infinite hierarchy does it
seem that they demonstrate some computational capacity exceeding
that of Turing machines.
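
To spell out the three cases in code (a sketch only; "meaning_of"
is a hypothetical stand-in for one step up the hierarchy):

    def meaning_of(sign):
        # One step up the intensional hierarchy (toy version).
        return "meaning of " + sign

    def grasp_bounded(sign):
        # Case 1: a constant upper bound on levels.  An FSA can
        # do this, since the bound is fixed in advance.
        for _ in range(3):
            sign = meaning_of(sign)
        return sign

    def grasp_unbounded(sign, depth):
        # Case 2: finite but unbounded depth, supplied with the
        # input.  This needs Turing-machine power, since storage
        # grows with the depth.
        for _ in range(depth):
            sign = meaning_of(sign)
        return sign

    # Case 3, resolving *every* level of the infinite hierarchy,
    # corresponds to no terminating program at all; that is the
    # capacity the reductio would need people to have.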

> For an analogous example, consider the integers.  It's well-known that
> no complete recursive axiomatization of elementary arithmetic can be
> given; furthermore, the axioms of the first-order PA are not even
> categorical, i.e. they fail to characterize their models up to
> isomorphism.  In spite of all that, human mathematicians seem to have
> no difficulty in operating with semantical notions like that of the
> standard model of the natural numbers, which inherently can't be
> captured by a FSA.

The Boyer-Moore theorem prover can "work with" semantic notions
like that of the standard model.  (I won't say that it does this
without difficulty, but then I would not claim this of people
either!)  By "work with", I mean, for example, that the Boyer-Moore
theorem prover can prove that 2nd-order PA is categorical.  Of
course, it cannot determine whether or not an arbitrary proposition
is true for the standard model.  But then, neither can people.
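
For reference, the categoricity claim here is Dedekind's classical
theorem that the full second-order induction axiom pins down the
model up to isomorphism; in the usual notation,

    \[
      \mathcal{M} \models \mathrm{PA}^2 \;\wedge\;
      \mathcal{N} \models \mathrm{PA}^2
      \quad\Longrightarrow\quad \mathcal{M} \cong \mathcal{N}.
    \]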

I understand the analogy Mr Zeleny draws.  But it does not help.
In neither the second branch of his argument nor in this example
do I understand what computational capacity Mr Zeleny attributes
to people that is impossible for machines.

-----
I am not arguing here that people are FSA's, nor even that their
cognitive abilities are no greater than what might be implemented
through FSA's.  Mr Zeleny assumes that materialism requires
people to be FSA's, but this is far from clear.  An ANN with
real-valued weights, for example, is more powerful than an FSA.
(A single exact real number can encode unboundedly many bits, so
such a net is not finite-state.)  It *might* be argued that
physics limits any biochemical machine to finitely many states,
but this is far from clear, and Mr Zeleny has so far assumed it
rather than argued it.  (For an example of the subtle
interactions between physics and computation, consider that David
Deutsch has shown how to create a machine that, with some
probability, computes an answer faster than any deterministic
Turing machine can *if* the many-worlds interpretation of quantum
mechanics is true.)
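
To make the point about real-valued weights concrete, here is a
toy sketch, using exact rationals to stand in for ideal reals
(actual hardware, of course, has finite precision):

    # A single number in [0, 1) used as an unbounded binary stack.
    from fractions import Fraction

    def push(x, bit):
        # New bit becomes the leading binary digit of the fraction.
        return (Fraction(bit) + x) / 2

    def pop(x):
        bit = 1 if x >= Fraction(1, 2) else 0
        return bit, x * 2 - bit

    x = Fraction(0)
    for b in (1, 0, 1, 1):        # push an arbitrary bit string
        x = push(x, b)
    out = []
    for _ in range(4):            # pop everything back off
        bit, x = pop(x)
        out.append(bit)
    print(out)                    # [1, 1, 0, 1]: last in, first out

One exact real value thus holds arbitrarily many bits, which is
why such a net escapes the finite-state bound.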

I think it would be very interesting if someone could show that
human cognitive capacity exceeds what is possible with FSA's, 
or even what is possible with Turing machines.  But Mr Zeleny
has not yet convinced me.  Perhaps it is my misunderstanding
that sees gaps in his argument where there are in fact solid
deductions.  If so, I welcome his further explanation.

Russell


