From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!psych.toronto.edu!michael Mon Dec 16 11:00:32 EST 1991
Article 1977 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!psych.toronto.edu!michael
From: michael@psych.toronto.edu (Michael Gemar)
Subject: Re: Searle and the Chinese Room
Message-ID: <1991Dec9.172000.3236@psych.toronto.edu>
Organization: Department of Psychology, University of Toronto
References: <YAMAUCHI.91Dec5040116@heron.cs.rochester.edu> <1991Dec5.191043.10565@psych.toronto.edu> <302@tdatirv.UUCP>
Date: Mon, 9 Dec 1991 17:20:00 GMT

In article <302@tdatirv.UUCP> sarima@tdatirv.UUCP (Stanley Friesen) writes:
>In article <1991Dec5.191043.10565@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:
>|The strength of Searle's argument is that, contrary to what some may claim,
>|it does not rest on any particular way of telling the Chinese Room story.  The
>|argument simply is that it is impossible to generate semantics from a purely
>|syntactic system.  This, Searle argues, is a *logical* point, true simply in
>|virtue of what the words "syntax" and "semantics" mean.  
>
>Then humans do not understand either.  Or both humans and computers can
>understand if programmed for semantics as well as syntax (whatever that
>may mean).

Whatever *syntax* might mean?!!!  I thought its definition was clear --
purely symbol-based manipulation, rules that operate on the *form* of
marks, and not on their content.
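(To make "purely symbol-based manipulation" concrete, here is a toy sketch --
the rule table and the nonsense tokens are my own inventions, not anything
from Searle.  The point is only that every step keys on the *shape* of the
marks; nothing in the system has access to what, if anything, they mean.)

```python
# A purely syntactic "room": a rule table keyed only on the form of
# the input marks.  The tokens below are arbitrary placeholders.
RULES = {
    "squiggle squoggle": "squoggle squiggle",
    "blip blop": "blop blip",
}

def room(marks: str) -> str:
    """Match the input string by form alone and emit the associated
    output string.  No step here inspects or requires content."""
    return RULES.get(marks, "?")

print(room("squiggle squoggle"))  # emits "squoggle squiggle"
```

Whether a system like this -- however large the rule table -- could ever
*understand* its marks is, of course, exactly what is at issue.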

And, as far as I understand it, the question to be decided is whether
such a thing as "programming semantics" is possible.  Searle argues that it
is not, and (importantly) argues that understanding comes about in
virtue of the physical characteristics of our brains.  Functionalism
is not enough.

>
>The serious error in Searle's reasoning is that he has *never* shown any
>*objective* evidence that my brain is doing anything that a computer attached
>to appropriate input devices could not do.

He is providing a *logical* argument.  It is true (Searle asserts) due to the
meaning of the terms.  No evidence is required.

However, the performance of the Chinese Room demonstration could easily 
provide the objective evidence you seek.  I would still claim that, no
matter what input devices you hook up, you would still not understand Chinese.

>
>And, since my knowledge of neurology suggests that all of my mental functions
>are based on electro-chemical reactions in characterizable processing elements,
>I must conclude that however our brain may achieve meaning, it is computable.

Here, Searle would disagree with you.  By analogy, my knowledge of elasticity
suggests that all of the functions of elasticity are based on physical
properties in discrete elements.  But I can't conclude that an appropriately
programmed computer is elastic.

Searle argues (although I do not necessarily agree with him) that it is
precisely the *physical* aspects of the electro-chemical reactions, and
*not* merely their formal properties, which are necessary for understanding.

>
>I do doubt that a pure algorithm, lacking any sensory input modalities
>could show intelligence.  But computers are just as capable of processing
>and encoding sense data as the human nervous system.

But Searle's argument *assumes* that such inputs aren't necessary, that is,
he allows strong AI its *strongest* form.  You can include special inputs
if you like, but that merely argues *against* the possibility of 
purely computational AI, which Searle is quite happy to assume (at least for
the moment...).

>
>|  What is required from the
>|supporters of strong AI is an account of why the *logical argument* fails,
>|that is, an account of how syntax *by itself* can generate semantics.
>
>Or how about a challenge to Searle's definition of semantics which excludes
>the very method by which the human brain establishes meaning, namely
>association of 'symbols' with encoded sensory data.

Association *by itself* is not meaning.

>
>Thus, I maintain that computers are just as capable of semantic processing
>as are humans.  Thus his argument, while strictly true, does not apply to
>real computers, only to his naive preconceptions about computers.

What do you mean by "real computers"?  Searle's argument *in principle*
applies to any architecture, even connectionism.

>
>| I know
>|of no critic of Searle who offers such an account.  Note that merely
>|gainsaying the point by claiming that syntax *can* generate semantics
>|(as the Churchlands do) is *not* an argument, but merely contradiction.
>
>I do not claim this, I claim that he does not know how to recognize
>semantics when he sees it.  As far as I can tell he would deny semantics
>to humans (assuming I am right and we get meaning through encoded sense data).

Well, Searle knows that *he* understands, so he doesn't deny it to himself.  I
would be tempted to say, however, that he might deny it to some of his
critics... :-)

- michael



