From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!mips!spool.mu.edu!hri.com!ukma!hsdndev!husc-news.harvard.edu!zariski!zeleny Sun Dec  1 13:05:54 EST 1991
Article 1675 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca comp.ai.philosophy:1675 sci.philosophy.tech:1183
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!mips!spool.mu.edu!hri.com!ukma!hsdndev!husc-news.harvard.edu!zariski!zeleny
From: zeleny@zariski.harvard.edu (Mikhail Zeleny)
Newsgroups: comp.ai.philosophy,sci.philosophy.tech
Subject: On Denoting (was re: Arguments against Machine Intelligence)
Summary: do we?
Keywords: denotation, sense, communication
Message-ID: <1991Nov27.115032.5957@husc3.harvard.edu>
Date: 27 Nov 91 16:50:28 GMT
References: <43772@mimsy.umd.edu> <1991Nov27.111048.4933@odin.diku.dk>
Organization: Dept. of Math, Harvard Univ.
Lines: 248
Nntp-Posting-Host: zariski.harvard.edu

In article <1991Nov27.111048.4933@odin.diku.dk> 
kurt@diku.dk (Kurt M. Alonso) writes:

>kohout@cs.umd.edu (Robert Kohout) writes:

RK:
>>The symbol-grounding argument is the modern-day equivalent of Zeno's
>>paradox. Just because we find symbols defining symbols does not mean
>>the process must continue infinitely, and just because we do not
>>understand the way such a system might converge on a representational
>>system does not mean it cannot be done. If this is to be taken as a
>>serious objection, please show me that it is computationally impossible
>>to ground symbols on a digital machine.

Very well, I shall repeat my argument.  It is commonly assumed that
computers are capable of symbol manipulation; an analogous claim is
sometimes made on behalf of human brains, neural pulses being interpreted
as the symbols in question.  However, in considering such claims, we must
be careful about what we mean by `symbol'.  In philosophical use, this
term is sometimes interpreted as a synonym of `sign' (cf. the use by
Whitehead), sometimes as standing for a conventional, substitutive sign
(e.g. by Peirce and Morris), and sometimes as an iconic, analogical sign
(e.g. by Kant and Hegel).

Now, if neural pulses are indeed symbols in the above sense, it seems
reasonable to ask what the material property is (for it must be material
under the assumptions of reductive materialism, assumed by Dennett \& Co.)
in virtue of which they stand for their referents, in accordance with the
traditional characterization of the sign by the formula *aliquid pro
aliquo*.  The problem with identifying such a property is twofold.

If, on the one hand, one identifies the neural pulses as purely denotative
signs, ones that refer without expressing, one is forced to postulate a
causal relation in virtue of which these signs denote, and to stipulate
that this causal relation is itself entirely immanent in nervous activity.
This directly contradicts the fact that our language, allegedly founded
solely on such nervous activity, has no trouble referring to objects and
phenomena that occur outside of it.

On the other hand, should one assume that neural pulses are connotative
signs, which refer by virtue of expressing an intensional meaning, then
such meanings, by the above observation, must be entirely captured in the
physical states of the brain.  Now, as I have argued elsewhere on the
Putnam thread, it is well known that intensions, once admitted, bring in a
transfinite hierarchy thereof; in other words, on the connotative theory,
reference depends on the grasp (and, under the reductive materialist
assumption, the physical embodiment) of meanings, which depend on meanings
of meanings, which in turn depend on meanings of meanings of meanings, and
so on.  Note that it does you no good to argue that in practice a brain
only uses a finite initial segment of the intensional hierarchy, for the
question of the nature of reference will simply reappear at the highest
admitted level.  On the assumption that the brain is a finite-state
automaton, this amounts to a reductio ad absurdum of materialist
semantics.
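
To make the regress concrete, here is a minimal sketch (in Python, all
names invented for illustration) of a connotative lexicon in which every
sign refers only via further signs; any finite table must bottom out
somewhere, and at the bottom the question of reference simply reappears:

    # A finite connotative lexicon: each sign "refers" only via the
    # sense expressed by further signs.
    lexicon = {
        "bachelor":  "unmarried man",
        "unmarried": "never married",
        "man":       "adult male human",
        # ...any finite table must stop somewhere...
    }

    def ground(sign, depth=0, max_depth=10):
        """Chase senses through the table; where the table is
        exhausted, in virtue of WHAT does the last sign refer?"""
        if sign not in lexicon or depth >= max_depth:
            return sign          # the regress is merely deferred here
        return " ".join(ground(word, depth + 1, max_depth)
                        for word in lexicon[sign].split())

    print(ground("bachelor"))    # -> "never married adult male human"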

Purely syntactical considerations analogous to the above argument can
readily be seen to invalidate Dennett's attempt to refute Searle's Chinese
Room argument by appealing to the ``Systems Reply'' (pp. 435--40 of his
``Consciousness Explained'').  Quite aside from Dennett's bizarre claim
that AI software is in principle different from ``some simple table-lookup
architecture'' (of course, all Turing machine or FSA programs *are* by
definition instances of some simple table-lookup architecture), the
question to ask is whether such a system can be finite while still
implementing the necessary semantical knowledge.  As can readily be seen
from the above, this is most manifestly not the case.
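
The parenthetical remark is easy to verify for oneself.  A sketch (in
Python, with a made-up toy machine) of an arbitrary Turing computation
reduced to nothing but lookups in a finite transition table:

    # Any Turing machine IS a finite table plus a tape; each step is
    # one table lookup.  This toy machine flips bits until a blank.
    table = {
        ("s", "0"): ("s", "1", 1),    # (state, read) -> (state', write, move)
        ("s", "1"): ("s", "0", 1),
        ("s", "_"): ("halt", "_", 0),
    }

    def run(tape, state="s", pos=0):
        tape = list(tape)
        while state != "halt":
            state, write, move = table[(state, tape[pos])]  # pure lookup
            tape[pos] = write
            pos += move
            if pos == len(tape):
                tape.append("_")      # the unbounded tape is supplied
                                      # from outside the finite table
        return "".join(tape)

    print(run("0110_"))               # -> "1001_"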

For an analogous example, consider the integers.  It is well known that
no complete recursive axiomatization of elementary arithmetic can be
given; furthermore, the axioms of first-order PA are not even
categorical, i.e. they fail to characterize their models up to
isomorphism.  In spite of all that, human mathematicians seem to have
no difficulty in operating with semantical notions like that of the
standard model of the natural numbers, which inherently can't be
captured by a FSA.
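
For the reader who wants the non-categoricity spelled out, the standard
compactness argument runs as follows.  Add to the language of PA a fresh
constant $c$, and consider the theory

    T = PA \cup { c > 0, c > S0, c > SS0, ... }.

Every finite subset of T has a model (interpret $c$ as a sufficiently
large standard numeral), so by compactness T has a model; in that model
$c$ denotes an element above every standard natural number.  Hence
first-order PA has models not isomorphic to the standard one, and no
first-order theory, a fortiori no FSA, singles out the standard model.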

To sum up the above, consider Davidson's argument in ``Theories of Meaning
and Learnable Languages''.  Notably, his finite-learnability criterion
seems to be mandatory for any device equivalent to a finite-state
automaton.  Now, the semantical structure of the language described above
is not finitely learnable in Davidson's sense (although not for the
reasons given by Davidson himself), insofar as it presupposes the grasp of
an infinite hierarchy of senses for each of the terms.  This holds at
least if we interpret it in accordance with Frege's principle that
intension determines extension, and with the observation that, at each
level of intensions, our cognitive grasp of the lower-level semantical
entities depends on that of the higher, more finely differentiated
intensional level -- for in the intensional hierarchy, each intensional
object is an extension of its ascendant concepts.  This consideration
bears on the possibility of success in AI research.  For even under
operational criteria, such as those stipulated by the Turing test, the
success of the AI enterprise will depend on the theoretical adequacy of
the theory of reference it implements.  As noted above, classical
model-theoretic semantics is incapable of fully characterizing reference
(see also the overview in Lakoff's ``Women, Fire, and Dangerous Things'',
chapter 15); hence it is incapable of sufficiently constraining any
derived operational criteria that purport to implement the AI notion of
referential success.  And the alternative to model-theoretic semantics
that I am advocating above (the Frege-Church semantics) does not seem to
lend itself to an implementation, or even a representation, in
finite-state automata.  Thus, if I am right, AI projects are doomed to
failure.
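
A crude picture of why the Frege-Church hierarchy resists finite
representation (Python again; Church's actual ``Logic of Sense and
Denotation'' is of course far more refined than this sketch):

    # Level-(n+1) senses determine level-n entities (intension
    # determines extension).  A program can encode any FIXED finite
    # tower, but the reference of the top level is fixed by nothing
    # within the program.  Names are invented for illustration.
    class Sense:
        def __init__(self, level, determines):
            self.level = level            # position in the hierarchy
            self.determines = determines  # the level-(n-1) entity presented

    planet          = "Mars"                     # level 0: the extension
    sense_of_planet = Sense(1, planet)           # level 1: mode of presentation
    sense_of_sense  = Sense(2, sense_of_planet)  # level 2: sense thereof
    # ...any finite program halts the tower at some level n, and the
    # question of reference reappears precisely there.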

Compared to Dennett, McCarthy, & Co., I situate myself on the other side
of Church's Thesis, emphasizing fortuitous, rather than effective,
computability.  Strong AI apologists, on the other hand, seem to side
with their predecessors McCulloch and Pitts, whose semantical finiteness
assumption is already implicit in the quaint title of their original AI
manifesto, ``A Logical Calculus of the Ideas Immanent in Nervous
Activity''.  There we find their explicit, specious, unjustified
identification of ``computability by an organism'' with Turing
computability; their counterfactual claims contradicting the preceding:
``every net, if furnished with a tape [...] can compute only such numbers
as can a Turing machine, [...and] each of the latter numbers can be
computed by such a net'' (where does the tape come from?); and, finally,
their monumental misreading of Church's Thesis by conflating
computability with effective computability: ``This is of interest as
affording a psychological justification of the Turing definition of
computability and its equivalents, Church's $\lambda$-definability and
Kleene's primitive recursiveness: if any number can be computed by an
organism, it is computable by those definitions, and conversely.''  (See
Boden's Philosophy of AI anthology, page 37.)  In spite of all the flaws
of Penrose's argument, I am more inclined to trust his (and Church's)
intuitions than this sort of discombobulated reductionist sloganeering.
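
Lest the McCulloch-Pitts net sound grander than it is, here is one of
their threshold units sketched in Python (an equivalent of their 1943
formalism, not a quotation from it):

    # A McCulloch-Pitts unit: binary inputs, fixed weights, a threshold.
    # A finite net of such units is exactly a finite-state machine; the
    # unbounded tape in the quoted claim must be furnished from outside.
    def mp_unit(inputs, weights, threshold):
        return int(sum(i * w for i, w in zip(inputs, weights)) >= threshold)

    # Their "logical calculus": AND, OR, NOT as threshold units.
    AND = lambda x, y: mp_unit((x, y), (1, 1), 2)
    OR  = lambda x, y: mp_unit((x, y), (1, 1), 1)
    NOT = lambda x:    mp_unit((x,),   (-1,),  0)

    assert AND(1, 1) == 1 and AND(1, 0) == 0
    assert OR(0, 1) == 1 and NOT(1) == 0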

RK:
>>The most frequent response to this challenge is an appeal to semantics,
>>which generally also implies an appeal to consciousness. These arguments
>>most commonly involve an intuition that 'meanings' cannot be conveyed
>>by formal digit flipping. Why not? And even if this is true, why are
>>such representations required for intelligent behavior? Once again,
>>since we see such representations in the brain, what properties of
>>brain architecture are not present in digital machines, and why aren't
>>discrete representations of analog information sufficient? Please don't
>>offer up the Chinese Room, for not only is it a flawed argument at the
>>most basic level, but it presumes an intelligent machine for the purposes
>>of demonstrating that the symbol cruncher cannot be said to 'understand'
>>anything. If this is the strongest objection one can raise to the
>>digital approach, I will sleep easily.

In light of the above, what you see in the brain can only be construed as
representations under your interpretation; taking McCarthy's example of a
thermometer, suppose you were to ascribe a first-person viewpoint to a
device equipped with one.  In virtue of what physical property would such
a device interpret the thermometer's reading as what *we* understand as
temperature?

RK:
>>I confess to having great difficulties following some of the various
>>philosophical stances. Perhaps that's why I prefer mathematics.
>>Besides, I remember what Nietzsche said of 'old Kant' - that he essentially
>>proved what he wanted to prove, but that his desire for the result
>>was prior to the proof. If such an objection can be raised for Kant,
>>who among all great philosophers stands out as one of the most methodical
>>and even ponderous in his methods, I must remain sceptical of such 'proofs'.

"Old Kant" wasn't a mathematical logician; Frege was the originator of both
mathematical logic and formal semantics; Church is the most distinguished
living logician and philosopher of language.  Ignore them at your peril.

KA:
>I think that the main objection that has been presented in this group
>against the strong AI thesis is that a programmed computer cannot
>have understanding, in the human sense.

So it is.

KA:
>Now, just to clarify things I will give my definition of understanding:
>"understanding is the phenomenon we experience when upon exposure to
>an isolated mental construction we find that this construction is
>coherent with previous knowledge we had. Such previous knowledge may
>consist of intuitively true 'facts' or of other mental constructions".

I define understanding as "appreciation of Swiss cheese".  Given that no
computer is capable of this feat, it follows that AI is wholly bogus.

On a more serious note, you might consider Aristotle's tripartite
conception of *dianoia* (understanding as discursive, syllogistic
reasoning), subdivided into *episteme* (knowledge for its own sake),
*techne* (knowledge for production), and *phronesis* (knowledge for
conduct) -- see Anal. Post. I.89b and II.100b.  Subsequently you might
ponder the general Platonic conception of *noesis* (intellection), as
presented in the Republic 509d--511e, placing *dianoia* in its context.

Alternatively, you might turn to the Moderns like "old Kant", with his
active faculty of understanding, which is the source of concepts (the first
Critique, A51/B75), the laws of which relate *a priori* to objects (ibid,
A57/B81).  Whatever you do, don't limit yourself to a spuriously produced
definition that presupposes, among other things, a coherence theory of
truth. 

KA:
>That understanding according to this definition requires self-consciousness
>should be clear. Also, it should be clear that the subject experiencing
>understanding is intentionally putting forward a desire of giving
>meaning to the mental construction. 

I don't understand your claim of putting forward a desire of giving meaning
to the mental construction.  When Aristotle makes his famous claim that all
men by nature desire to know, he certainly doesn't imply that this desire
is implicit in knowledge.  Why can't we understand apathetically?

KA:
>Now, what some people object against the strong AI thesis is that
>the formalism of Turing machines does not allow one to model the human
>semantic intentionality involved in understanding, mainly because
>the relation subject-object present in meaning-giving per se transcends
>the subject, and consequently, no theory of meaning can be formulated
>such that a TM can implement it.

This looks like an adequate summary.

KA:
>This critique is clearly issued from strong philosophical premisses,
>namely that in assigning semantics, man is in some sense transcending
>himself, approaching ontologically distant entities.

...only inasmuch as we succeed in denoting.  Do you think the name `Venus'
designates the Morning Star, or the bright spot in your field of vision
just before sunrise?
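
The point in miniature (Python, illustration only): two senses, one
denotation, and the identity informative precisely because the senses
differ while the referent coincides:

    # Frege's own example: distinct modes of presentation of Venus.
    denotation = {
        "the Morning Star": "Venus",   # as seen before sunrise
        "the Evening Star": "Venus",   # as seen after sunset
    }
    assert denotation["the Morning Star"] == denotation["the Evening Star"]
    # A theory that keeps only the right-hand column has discarded
    # exactly what the speaker grasps.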

KA:
>The point we should now elucidate is whether by 'knowing' or giving
>meaning to entities man is in fact transcending himself, and in that
>case, whether this implies that no well-defined formalism in
>a logical sense can describe such a semantics.

Good point, but a small correction: `well-defined' is not synonymous with
`finite'.  Do you think that we are intrinsically incapable of transcending
our separate phenomenal microcosms through the use of language?

>Kurt.

>>Bob Kohout


'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`
`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'
: Qu'est-ce qui est bien?  Qu'est-ce qui est laid?         Harvard   :
: Qu'est-ce qui est grand, fort, faible...                 doesn't   :
: Connais pas! Connais pas!                                 think    :
:                                                             so     :
: Mikhail Zeleny                                                     :
: 872 Massachusetts Ave., Apt. 707                                   :
: Cambridge, Massachusetts 02139                                     :
: (617) 661-8151                                                     :
: email zeleny@zariski.harvard.edu or zeleny@HUMA1.BITNET            :
:                                                                    :
'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`
`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'


