From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!spool.mu.edu!munnari.oz.au!uunet!tdatirv!sarima Mon Dec  9 10:48:41 EST 1991
Article 1928 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!spool.mu.edu!munnari.oz.au!uunet!tdatirv!sarima
>From: sarima@tdatirv.UUCP (Stanley Friesen)
Newsgroups: comp.ai.philosophy
Subject: Re: Searle and the Chinese Room
Message-ID: <302@tdatirv.UUCP>
Date: 6 Dec 91 17:14:39 GMT
Article-I.D.: tdatirv.302
References: <gdCb=YW00UhWQ2lpNp@andrew.cmu.edu> <YAMAUCHI.91Dec5040116@heron.cs.rochester.edu> <1991Dec5.191043.10565@psych.toronto.edu>
Reply-To: sarima@tdatirv.UUCP (Stanley Friesen)
Organization: Teradata Corp., Irvine
Lines: 50

In article <1991Dec5.191043.10565@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:
|... miss the distinction that can be drawn between Searle's
|*logical argument*, namely, that syntax is not sufficient for semantics, and
|his *demonstration*, or *thought experiment*, namely, the Chinese Room.
|
|The strength of Searle's argument is that, contrary to what some may claim,
|it does not rest on any particular way of telling the Chinese Room story.  The
|argument simply is that it is impossible to generate semantics from a purely
|syntactic system.  This, Searle argues, is a *logical* point, true simply in
|virtue of what the words "syntax" and "semantics" mean.  

Then humans do not understand either.  Or both humans and computers can
understand if programmed for semantics as well as syntax (whatever that
may mean).

The serious error in Searle's reasoning is that he has *never* shown any
*objective* evidence that my brain is doing anything that a computer attached
to appropriate input devices could not do.

And, since my knowledge of neurology suggests that all of my mental functions
are based on electro-chemical reactions in characterizable processing elements,
I must conclude that however our brain may achieve meaning, it is computable.

I do doubt that a pure algorithm, lacking any sensory input modalities,
could show intelligence.  But computers are just as capable of processing
and encoding sense data as the human nervous system is.

|  What is required from the
|supporters of strong AI is an account of why the *logical argument* fails,
|that is, an account of how syntax *by itself* can generate semantics.

Or how about a challenge to Searle's definition of semantics, which excludes
the very method by which the human brain establishes meaning, namely the
association of 'symbols' with encoded sensory data?
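To make that claim concrete, here is a toy sketch (entirely hypothetical; the names `encode`, `SymbolGrounder`, `associate`, and `interpret` are my own inventions, not anything from the post) of what "associating symbols with encoded sensory data" might look like: a symbol acquires its "meaning" from stored sensory episodes, and new sense data is interpreted by matching against those encodings.

```python
def encode(sense_data):
    """Stand-in sensory encoder: reduce raw readings to a crude feature tuple."""
    return (min(sense_data), max(sense_data), sum(sense_data) / len(sense_data))

class SymbolGrounder:
    """Grounds symbols in encoded sensory episodes rather than in other symbols."""

    def __init__(self):
        self.groundings = {}  # symbol -> list of encoded sensory episodes

    def associate(self, symbol, sense_data):
        """Attach one more encoded sensory episode to a symbol."""
        self.groundings.setdefault(symbol, []).append(encode(sense_data))

    def interpret(self, sense_data):
        """Return the symbol whose stored episodes best match the new data."""
        target = encode(sense_data)

        def dist(enc):
            return sum((a - b) ** 2 for a, b in zip(enc, target))

        return min(self.groundings,
                   key=lambda s: min(dist(e) for e in self.groundings[s]))

g = SymbolGrounder()
g.associate("hot", [90, 95, 100])   # e.g. temperature readings
g.associate("cold", [0, 5, 10])
print(g.interpret([88, 93, 99]))    # → hot
```

The point of the sketch is only that nothing here is "pure syntax": the symbol's use is fixed by stored sense data, which is exactly the kind of grounding the paragraph above attributes to the brain.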

Thus, I maintain that computers are just as capable of semantic processing
as humans are.  His argument, while strictly true of purely syntactic systems,
does not apply to real computers, only to his naive preconceptions about them.

| I know
|of no critic of Searle who offers such an account.  Note that merely
|gainsaying the point by claiming that syntax *can* generate semantics
|(as the Churchlands do) is *not* an argument, but merely contradiction.

I do not claim this; I claim that he does not know how to recognize
semantics when he sees it.  As far as I can tell, he would deny semantics
to humans (assuming I am right and we get meaning through encoded sense data).
-- 
---------------
uunet!tdatirv!sarima				(Stanley Friesen)
