From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!paperboy.osf.org!hsdndev!husc-news.harvard.edu!zariski!zeleny Fri Jan 31 10:26:36 EST 1992
Article 3223 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca comp.ai.philosophy:3223 sci.philosophy.tech:1981
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!paperboy.osf.org!hsdndev!husc-news.harvard.edu!zariski!zeleny
From: zeleny@zariski.harvard.edu (Mikhail Zeleny)
Newsgroups: comp.ai.philosophy,sci.philosophy.tech
Subject: Re: Is understanding algorithmic?
Message-ID: <1992Jan28.122457.8161@husc3.harvard.edu>
Date: 28 Jan 92 17:24:54 GMT
References: <1992Jan26.010642.24883@smsc.sony.com> <1992Jan26.014607.8073@husc3.harvard.edu> <6523@pkmab.se>
Organization: Dept. of Math, Harvard Univ.
Lines: 114
Nntp-Posting-Host: zariski.harvard.edu

In article <6523@pkmab.se> 
ske@pkmab.se (Kristoffer Eriksson) writes:

>In article <1992Jan26.014607.8073@husc3.harvard.edu> 
>zeleny@zariski.harvard.edu (Mikhail Zeleny) writes:

>>In article <1992Jan26.010642.24883@smsc.sony.com> 
>>markc@smsc.sony.com (Mark Corscadden) writes:

MC:
> >>  Analogously, I'd like to suggest that
> >>someone who believes it possible to prove that Turing machines cannot
> >>refer help me to apply such a proof, step by step, to the scenario above.

MZ:
> >  Your machines merely succeed in matching an internal
> >representation of the laboratory location, as well as of some objects
> >likely to be found there with a preprogrammed description, perhaps through
> >the use of a visual pattern matching algorithm.

KE:
>This is merely an assertion, not the application of a proof that Mark
>Corscadden asked for.

True.  Having given my argument on several occasions in the past, I felt
disinclined to repeat it, since its technical nature is liable to lead to
misinterpretation by readers not versed in analytic philosophy.  Well, here
goes. 

It is commonly assumed that computers are capable of symbol manipulation;
an analogous claim is sometimes made on behalf of human brains, neural
pulses being interpreted as the symbols in question.  However, in
considering such claims, we must be careful about what we mean by `symbol'.
In philosophical use, this term is sometimes interpreted as a synonym of
`sign' (cf. the use by Whitehead), sometimes as standing for a conventional,
substitutive sign (e.g. by Peirce and Morris), or, alternatively, as an
iconic, analogical sign (e.g. by Kant and Hegel).

Now, if neural pulses or the internal states of an FSA are indeed symbols
in the above sense, it seems reasonable to pose the question of what is the
material property (for it must be material under the assumptions of
reductive materialism, assumed by Dennett & Co.) in virtue of which they
stand for their referents, in accordance with the traditional
characterization of the sign by the formula *aliquid stat pro aliquo*.  The
problem with identifying such a property is twofold.

If, on one hand, one identifies the neural pulses as purely denotative
signs, ones that refer without expressing, one would be forced to postulate
a causal relation in virtue of which these signs denote, stipulating that
this causal relation is itself entirely immanent in nervous activity, in
direct contradiction to the fact that our language, allegedly founded
solely on such nervous activity, has no trouble referring to objects and
phenomena that occur outside of the latter.  For, on one hand, if an entity
can be said to refer, the mechanism of such reference must be taken as
being wholly within the province of the entity in question, to the extent
that we are justified in ascribing the reference to the said entity, rather
than to the extrinsic factors of its relation to its environment; on the
other hand, once we reject solipsism, we are forced to infer an external
reality of potential denotata, unconnected to our putative subject in any
manner that can be wholly subsumed by it.

On the other hand, should one assume that neural pulses are connotative
signs, which refer by virtue of expressing an intensional meaning, then
such meanings, by the above observation, must be entirely captured in the
physical states of the brain.  Now, as I have argued elsewhere on the
Putnam thread, it's well known that intensions, once admitted, bring in a
transfinite hierarchy thereof; in other words, on the connotative theory,
reference depends on the grasp of (and, under the reductive materialist
assumption, physical embodiment of) meanings, which depend on meanings of
meanings, which in turn depend on meanings of meanings of meanings, and so
on.  For at each intensional level it is reasonable to interpret the
concept as yet another sign, and to ask in virtue of what factor it
succeeds in referring to an object; in other words, it does us no good
to argue that in practice a brain or a computer only uses a finite initial
segment of the intensional hierarchy, for the question of the nature of
reference will only reappear on the highest admitted level thereof.  On the
assumption that the brain, like a computer, is a finite state automaton,
this amounts to a reductio ad absurdum of materialist semantics.  Moreover,
as is well-known, classical model-theoretic semantics is incapable of fully
characterizing reference, and ipso facto it is incapable of sufficiently
constraining any derived operational criteria that purport to implement the
AI notion of success of reference.  Thus, if I am right, AI projects of
creating a machine capable of signifying independently of its creator,
surely a prerequisite for machine intelligence, are doomed to failure.
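To make the regress concrete, here is a toy model (in Python; every name
and table entry below is my own hypothetical illustration, not anything
drawn from the discussion above).  A finite machine's `meanings' form a
finite table mapping each sign to the further sign that is supposed to
interpret it.  Tracing the chain, any finite table must terminate in a
sign it leaves uninterpreted, or else cycle; either way, the question of
what fixes reference simply reappears at the last link.

```python
# Toy illustration of the intensional regress in a finite machine.
# Each sign's "meaning" is just another sign in a finite table; the
# signs and entries here are hypothetical.

MEANINGS = {
    "water": "H2O-concept",
    "H2O-concept": "molecule-concept",
    "molecule-concept": "particle-concept",
    # The table is finite, so some sign has no further interpretant.
}

def trace_interpretants(sign, table):
    """Follow the chain of meanings until it runs out or loops."""
    seen = []
    while sign in table and sign not in seen:
        seen.append(sign)
        sign = table[sign]
    seen.append(sign)  # the last, uninterpreted (or repeated) sign
    return seen

chain = trace_interpretants("water", MEANINGS)
# The chain ends at a sign the table leaves uninterpreted: nothing
# inside the machine says what THAT sign refers to.
```

Enlarging the table only moves the uninterpreted sign one step further
out; it never removes it.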

Now, in Mark's example, the machines are programmed in a deterministic
manner; I take it that this programming is performed by a rational agent
(N.B. for David Gudeman: this factor, pace Kant and others, serves to
assign all moral responsibility).  This agent stipulates the operational
characteristics of the machines to the extent that he is capable of
controlling their functioning within their environment.  As I stated above,
the machines merely succeed in matching an internal representation of their
location, as well as of some objects likely to be found there, with a
preprogrammed description, perhaps through the use of a visual pattern
matching algorithm.  The reference, if any, belongs to the programmer.
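The claim admits a minimal sketch (Python; the feature vectors and
tolerance below are hypothetical illustrations).  The machine certifies
only the similarity of a sensed list of numbers to a template its
programmer wired in; the stipulation that the template denotes the
laboratory is made outside the machine, by the programmer.

```python
# Minimal sketch of preprogrammed pattern matching, with hypothetical
# feature encodings.  The programmer decides, outside the machine,
# that this template "means" the laboratory.

LAB_TEMPLATE = [0.9, 0.1, 0.4]   # fixed in advance by the programmer

def matches(sensed, template, tolerance=0.15):
    """Report whether sensed features fall within tolerance of the template."""
    return all(abs(s - t) <= tolerance for s, t in zip(sensed, template))

# The machine only certifies similarity of two number lists; whether
# the numbers denote a laboratory is the programmer's stipulation.
in_lab = matches([0.85, 0.2, 0.45], LAB_TEMPLATE)   # True
not_lab = matches([0.1, 0.9, 0.9], LAB_TEMPLATE)    # False
```

Nothing in `matches` distinguishes a laboratory from any other source of
the same numbers, which is the sense in which the reference belongs to
the programmer.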


All flames will be ignored; responses will be made solely at my discretion.

`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'
: Qu'est-ce qui est bien?  Qu'est-ce qui est laid?         Harvard   :
: Qu'est-ce qui est grand, fort, faible...                 doesn't   :
: Connais pas! Connais pas!                                 think    :
:                                                             so     :
: Mikhail Zeleny                                                     :
: 872 Massachusetts Ave., Apt. 707                                   :
: Cambridge, Massachusetts 02139           (617) 661-8151            :
: email zeleny@zariski.harvard.edu or zeleny@HUMA1.BITNET            :
:                                                                    :
'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`