Xref: newshub.ccs.yorku.ca rec.arts.books:10320 sci.philosophy.tech:1034 comp.ai.philosophy:1459
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usenet.coe.montana.edu!ogicse!hsdndev!burrhus!husc-news.harvard.edu!zariski!zeleny
From: zeleny@zariski.harvard.edu (Mikhail Zeleny)
Newsgroups: rec.arts.books,sci.philosophy.tech,comp.ai.philosophy
Subject: Re: Daniel Dennett (was Re: Commenting on the pos
Summary: materialism refuted
Message-ID: <1991Nov21.005355.5696@husc3.harvard.edu>
Date: 21 Nov 91 05:53:54 GMT
Article-I.D.: husc3.1991Nov21.005355.5696
References: <15018@castle.ed.ac.uk> <1991Nov19.183901.5640@husc3.harvard.edu> <32905@uflorida.cis.ufl.EDU>
Organization: Dada
Lines: 109
Nntp-Posting-Host: zariski.harvard.edu

In article <32905@uflorida.cis.ufl.EDU> 
fred@mosquito.cis.ufl.edu (Fred Buhl) writes:

[A lecture on manners omitted]

FB:
>I liked Chris Malcolm's Searle-in-a-nutshell -- that seems to me to be
>_exactly_ what he's saying.  My problem with Searle is what he
>_doesn't_ say, at least not clearly to me -- what are his requirements
>for allowing a system to be said to have "understood" something?  When
>Searle visited UF last year, I had a chance to ask him some questions
>after his lecture.  Trying to summarize what I'd read and just heard,
>I asked him, "Understanding requires Consciousness?" and he agreed.  I
>also asked him, "No symbol-manipulation system will ever be
>Conscious?" and he agreed.  I then asked him whether or not the brain
>wasn't a symbol-processing system (the symbols being neural pulses),
>but couldn't quite understand the refutation he gave -- wish I'd taped it.

Please be careful here about what you mean by `symbol'.  In philosophical
use, this term is sometimes interpreted as a synonym of `sign' (cf. the
usage of Whitehead), sometimes as standing for a conventional, substitutive
sign (e.g. by Peirce and Morris), and sometimes as an iconic, analogical
sign (e.g. by Kant and Hegel).

Now, if neural pulses are indeed symbols in the above sense, it seems
reasonable to ask what the material property is (material, for it must be
such under the assumptions of reductive materialism taken for granted by
Dennett & Co.) in virtue of which they stand for their referents, in
accordance with the traditional characterization of the sign by the formula
*aliquid pro aliquo*, something standing for something else.  The problem
with identifying such a property is twofold.

If, on the one hand, one identifies the neural pulses as purely denotative
signs, ones that refer without expressing, one is forced to postulate a
causal relation in virtue of which these signs denote, and to stipulate
that this causal relation is itself entirely immanent in nervous activity.
That stipulation directly contradicts the fact that our language, allegedly
founded solely on such nervous activity, has no trouble referring to
objects and phenomena occurring outside it.

On the other hand, should one assume that neural pulses are connotative
signs, which refer by virtue of expressing an intensional meaning, then
such meanings, by the above observation, must be entirely captured in the
physical states of the brain.  Now, as I have argued elsewhere on the
Putnam thread, it's well known that intensions, once admitted, bring in a
transfinite hierarchy thereof; in other words, on the connotative theory,
reference depends on the grasp of (and, under the reductive materialist
assumption, physical embodiment of) meanings, which depend on meanings of
meanings, which in turn depend on meanings of meanings of meanings, and so
on.  Note that it does you no good to argue that in practice a brain only
uses a finite initial segment of the intensional hierarchy, for the
question of the nature of reference will only reappear on the highest
admitted level thereof.  On the assumption that the brain is a finite state
automaton, this amounts to a reductio ad absurdum of materialist semantics.
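
To put the finitude point concretely, here is a toy sketch in Python (my
illustration only; the one-state-per-level assumption is mine, not part of
the argument above): if each level of the intensional hierarchy must be
physically embodied as a distinct state, a finite state set caps the
hierarchy at some top level, and the reference of that top level is left
exactly as unexplained as where we started.

    # Toy model: one machine state per embodied level of meaning.
    # However large the finite state count, the topmost embodied
    # level has no embodied meaning above it to confer reference.

    def embodied_levels(num_states: int) -> int:
        """At most num_states levels of the hierarchy are embodied."""
        return num_states

    def level_explained(num_states: int, level: int) -> bool:
        # Level n counts as explained only if level n + 1 is embodied.
        return level + 1 <= embodied_levels(num_states)

    states = 10**9                       # vast, but finite
    top = embodied_levels(states)        # highest embodied level
    print(level_explained(states, top))  # False: the regress reappears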

FB:
>The problem is that Searle can't describe what specific properties are
>required for his definition of Consciousness, only what sort of
>systems _aren't_ Conscious.  Until he gives such definitions (and
>that's probably impossible at this stage to do), I don't see much
>point in discussing his ideas, since (IMHO) he's not properly defined
>his terms yet.  Most of the argument about Searle's ideas, I think,
>have to do with problems in the definitions of terms.  (I also have
>grave doubts about the system he's describing _ever_ passing the
>Turing test, with no short-term memory or learning ability, but that's
>_another_ problem).

I certainly don't purport to speak for Searle; on the other hand, purely
syntactical considerations analogous to the above argument can reduce to
shreds Dennett's attempt to refute the Chinese Room argument by appealing
to the hackneyed "Systems Reply" (pp. 435--40 of his latest masterpiece,
"Consciousness Explained").  Quite aside from Dennett's ignorant and/or
charlatanic claim that AI software is in principle different from "some
simple table-lookup architecture" (of course, all Turing machine or FSA
programs *are* instances of some simple table-lookup architecture), the
question to ask is whether such a system can be finite while still
implementing the necessary semantical knowledge.  As can readily be seen
from the above, this is most manifestly not the case.
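
To see why the parenthetical claim is right, recall that a Turing machine's
transition function just *is* a finite lookup table.  A minimal sketch in
Python (the particular machine, a bit-flipper, is my toy example, not one
discussed by Dennett or Searle):

    # The whole "program" is this table, mapping (state, scanned
    # symbol) to (next state, symbol written, head movement); running
    # the machine is nothing but repeated table lookup.
    TABLE = {
        ('flip', '0'): ('flip', '1', +1),
        ('flip', '1'): ('flip', '0', +1),
        ('flip', '_'): ('halt', '_', 0),   # blank cell: stop
    }

    def run(tape: str) -> str:
        cells = list(tape) + ['_']         # '_' marks the blank end
        state, head = 'flip', 0
        while state != 'halt':
            state, cells[head], move = TABLE[(state, cells[head])]
            head += move
        return ''.join(cells).rstrip('_')

    print(run('0110'))                     # prints '1001'

A finite state automaton is the same story without the tape: its transition
function is a finite table, so "table lookup" names the whole class of such
programs, not some degenerate subclass of it.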

For an analogous example, consider the natural numbers.  It's well known
that no complete recursive axiomatization of elementary arithmetic can be
given; furthermore, the axioms of first-order PA are not even categorical,
i.e. they fail to characterize their models up to isomorphism.  In spite of
all that, certain refractory Stanford scientists notwithstanding, human
mathematicians seem to have no difficulty operating with semantical notions
like that of the standard model of the natural numbers, which inherently
can't be captured by an FSA.
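
For readers who want the non-categoricity claim spelled out, here is the
standard compactness argument in compressed form (my reconstruction of the
textbook proof, not part of the original post).  Add a fresh constant c to
the language of PA and consider the theory

    T = PA \cup \{\; c > \underbrace{S \cdots S}_{n}\,0 \;:\; n \in \mathbb{N} \;\}

Every finite subset of T holds in the standard model by interpreting c as a
sufficiently large numeral; by compactness, T therefore has a model, in
which c denotes an element greater than every numeral.  That model
satisfies all of PA yet is not isomorphic to the standard one.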

>(I know, _another_ Searle/Chinese room post.  Just what we need. Sigh.)
>---------------------------------------------------------------------------
>Fred Buhl, Grad Student            A proud member of the Union of
>UF Computer Science Dept. - AI     Unconcerned Scientists.       
>fred@reef.cis.ufl.edu              "Ants are smart.  _Really_ smart." 
>---------------------------------------------------------------------------


'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`
`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'
: What is good?  What is ugly?                             Harvard   :
: What is great, strong, weak...                           doesn't   :
: Don't know! Don't know!                                    think    :
:                                                             so     :
: Mikhail Zeleny                                                     :
: 872 Massachusetts Ave., Apt. 707                                   :
: Cambridge, Massachusetts 02139                                     :
: (617) 661-8151                                                     :
: email zeleny@zariski.harvard.edu or zeleny@HUMA1.BITNET            :
:                                                                    :
'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`
`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'