From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!caen!garbo.ucc.umass.edu!dime!chelm.cs.umass.edu!yodaiken Mon Dec  9 10:48:43 EST 1991
Article 1933 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!caen!garbo.ucc.umass.edu!dime!chelm.cs.umass.edu!yodaiken
From: yodaiken@chelm.cs.umass.edu (victor yodaiken)
Newsgroups: comp.ai.philosophy
Subject: Re: Searle and the Chinese Room
Message-ID: <40332@dime.cs.umass.edu>
Date: 7 Dec 91 16:25:21 GMT
References: <YAMAUCHI.91Dec5040116@heron.cs.rochester.edu> <1991Dec5.191043.10565@psych.toronto.edu> <302@tdatirv.UUCP>
Sender: news@dime.cs.umass.edu
Organization: University of Massachusetts, Amherst
Lines: 61

In article <302@tdatirv.UUCP> sarima@tdatirv.UUCP (Stanley Friesen) writes:
>In article <1991Dec5.191043.10565@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:
>|... miss the distinction that can be drawn between Searle's
>|*logical argument*, namely, that syntax is not sufficient for semantics, and
>|his *demonstration*, or *thought experiment*, namely, the Chinese Room.
>|
>|The strength of Searle's argument is that, contrary to what some may claim,
>|it does not rest on any particular way of telling the Chinese Room story.  The
>|argument simply is that it is impossible to generate semantics from a purely
>|syntactic system.  This, Searle argues, is a *logical* point, true simply in
>|virtue of what the words "syntax" and "semantics" mean.  
>
>Then humans do not understand either. 

Case in point.

>The serious error in Searle's reasoning is that he has *never* shown any
>*objective* evidence that my brain is doing anything that a computer attached
>to appropriate input devices could not do.

It may not.

>And, since my knowledge of neurology suggests that all of my mental functions
>are based on electro-chemical reactions in characterizable processing elements,
>I must conclude that however our brain may achieve meaning, it is computable.

This is revolutionary knowledge. Could you please cite some references which
provide *objective* evidence that there are *characterizable processing
elements* in the brain and that they are the seat of all mental functions?

>I do doubt that a pure algorithm, lacking any sensory input modalities,
>could show intelligence.  But computers are just as capable of processing
>and encoding sense data as the human nervous system.

The *objective* evidence is still lacking. 

>Or how about a challenge to Searle's definition of semantics which excludes
>the very method by which the human brain establishes meaning, namely
>association of 'symbols' with encoded sensory data.

And the *objective* evidence that the "very method by which the human
brain establishes meaning" is via "association of 'symbols' with encoded
sensory data"? My guess is that there is no evidence, only your wild
conjecture.

>Thus, I maintain that computers are just as capable of semantic processing
>as are humans.  Thus his argument, while strictly true, does not apply to
>real computers, only to his naive preconceptions about computers.

Someone has naive preconceptions here, but it is not Searle.

>I do not claim this, I claim that he does not know how to recognize
>semantics when he sees it.  As far as I can tell he would deny semantics
>to humans (assuming I am right and we get meaning through encoded sense data).


Well, that's a rather odd assumption to start with if you want to prove
it as a conclusion. On the other hand, it does cut out a lot of those tedious
intermediate steps.
