From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!news.cs.indiana.edu!att!linac!uchinews!spssig!markrose Mon Dec 16 11:01:22 EST 1991
Article 2065 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!news.cs.indiana.edu!att!linac!uchinews!spssig!markrose
From: markrose@spss.com (Mark Rosenfelder)
Subject: Re: Searle, again
Message-ID: <1991Dec12.185711.26980@spss.com>
Date: Thu, 12 Dec 1991 18:57:11 GMT
References: <5826@skye.ed.ac.uk> <1991Dec11.180924.37884@spss.com> <1991Dec11.230822.698@psych.toronto.edu>
Nntp-Posting-Host: spssrs7.spss.com
Organization: SPSS, Inc.
Lines: 44

In article <1991Dec11.230822.698@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:
>Searle takes [the derivation of semantics from syntax]
>to be a *logical* impossibility.  His *conclusion* is
>that minds cannot solely be the result of formal computation, but must
>somehow arise due to the physico-chemical nature of the brain.  I personally 
>am leery of this conclusion, but I think his argument is correct.

This is not a conclusion of Searle's; it's an assumption: his Axiom 4.
He assumes that there is something in the physical nature of the brain
which allows us to "form mental contents (semantics)".  If this axiom
is dubious, so are his further conclusions.

>Me too.  I find his positive thesis opaque, and, if I understand it
>correctly, incoherent.  But that does not mean that his negative
>argument is incorrect.  It merely means he is incorrect about how
>human minds get around the problem. 

But if you are left without a positive theory of how human minds refer
in a way algorithms cannot, the negative argument collapses.  Human
brains are machines, entirely physical (according to Searle).  They
refer, somehow.  How do neurons do this little trick?  If you don't know,
you can't say that algorithms can't.

This business of "syntax can't yield semantics" is again an axiom in
Searle's argument, not a conclusion.  Why doesn't it apply to the brain?
Why aren't the meaning and thought in your brain "meaningless" chemical
activity?  Surely a neuron's firing is as ambiguous as a computer's symbol.

Let's be very clear: Searle himself is willing to concede that the
Chinese Room algorithm precisely duplicates the workings of the brain,
down to the last synapse.  Here you have a brain, which we admit has
semantics.  There you have a simulation of a brain.  What is the
difference between them?  That's the crux of the argument.
If you can't find a difference, then either the robot is thinking, or
the brain is as vulnerable to the Chinese Room's debunking as the robot is.
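
(To see how little is at stake computationally, here is a minimal sketch
of what "simulating the brain down to the last synapse" amounts to.  I've
written it in Python; the threshold rule and every name in it are my own
toy invention, not anyone's actual neural model.  The point is only that
the whole simulation is arithmetic on tokens, start to finish.)

    # Toy synapse-level simulation: every step is arithmetic on numbers
    # that *we* choose to call neurons and synapses.  The threshold rule
    # and the weights are invented for illustration only.

    def step(activations, weights, threshold=1.0):
        """One update: a neuron fires iff its weighted input crosses
        the threshold.  Pure symbol manipulation throughout."""
        n = len(activations)
        return [1.0 if sum(weights[i][j] * activations[j]
                           for j in range(n)) >= threshold else 0.0
                for i in range(n)]

    # Three "neurons," fully connected, with made-up weights.
    weights = [[0.0, 0.6, 0.6],
               [0.9, 0.0, 0.2],
               [0.5, 0.5, 0.0]]
    state = [1.0, 0.0, 1.0]
    for _ in range(5):
        state = step(state, weights)
    print(state)  # numbers in, numbers out; no semantics anywhere

If the brain's semantics survives being described this way, the question
is why a program's couldn't.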

To my mind, Searle's concession here was a fatal overextension.  I take the
Chinese Room story as a compelling response to AI projects such as
Schank's, in which stories are fed to a computer and indeed manipulated
by "pushing symbols around."  I feel the same dissatisfaction I presume
Searle felt hearing the claim that these programs "understand."  But my
objection would be that a program lacking the immense bank of real-world
experience humans have cannot be said to understand.
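
(For concreteness, here is a toy sketch in the same spirit: it answers a
question about a story by pure pattern matching against a canned
"restaurant script."  It is my own invention, in Python, and is far
cruder than Schank's actual programs; every name and rule in it is made
up for illustration.)

    # Toy story "understander" in the spirit of script-based programs.
    # All names and rules here are invented for illustration.

    RESTAURANT_SCRIPT = ["enter", "order", "eat", "pay", "leave"]

    def answer(story_events, question_event):
        """Did the event happen?  Yes if the story mentions it;
        probably if the script places it between two mentioned steps."""
        if question_event in story_events:
            return "yes"
        idx = RESTAURANT_SCRIPT.index(question_event)
        before = any(RESTAURANT_SCRIPT.index(e) < idx
                     for e in story_events)
        after = any(RESTAURANT_SCRIPT.index(e) > idx
                    for e in story_events)
        return "probably" if (before and after) else "unknown"

    # "John went into a restaurant and later paid the bill."
    story = ["enter", "pay"]
    print(answer(story, "eat"))  # -> "probably", by table lookup alone

The program draws the right inference, but only because someone typed the
restaurant script in for it; nothing in it has ever eaten a meal.  That,
and not the impossibility of syntax yielding semantics, is what the
Chinese Room made vivid for me.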


