Article 1873 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!bonnie.concordia.ca!thunder.mcrcim.mcgill.edu!lsuc!uunet.ca!uunet!zephyr.ens.tek.com!uw-beaver!cornell!rochester!yamauchi
From: yamauchi@cs.rochester.edu (Brian Yamauchi)
Newsgroups: comp.ai.philosophy
Subject: Re: A Behaviorist Approach to AI Philosophy
Message-ID: <YAMAUCHI.91Dec5040116@heron.cs.rochester.edu>
Date: 5 Dec 91 12:01:16 GMT
References: <gdCb=YW00UhWQ2lpNp@andrew.cmu.edu>
Sender: yamauchi@cs.rochester.edu (Brian Yamauchi)
Organization: University of Rochester
Lines: 58
In-Reply-To: Franklin Boyle's message of Mon,  2 Dec 1991 12:52:36 -0500
Nntp-Posting-Host: heron.cs.rochester.edu

In article <gdCb=YW00UhWQ2lpNp@andrew.cmu.edu> Franklin Boyle <fb0m+@andrew.cmu.edu> writes:
>Searle's point is that there is no "understanding" by the person in the room, 
>the room itself, etc. When someone says something to you in a language 
>you "understand", you are doing more than formally processing symbol
>strings between the time you hear what was said and your response.
>For me, at least, I am conscious of mental images (sure, many of 
>our responses seem "automatic" so that we aren't always aware of such
>sensations).  These images come from internal representations of the
>referents of the words we hear, information we acquire independent
>of language (i.e. we have to see, feel, hear etc. the referents -- in
>short, experience them).  It is this latter information which enables
>us to "understand" in Searle's sense.

>Caveat: As noted above, a tacit assumption in Searle's scenario is that the 
>	 system is capable of passing the Turing Test, for Searle doesn't want
>	 to confound his argument about the system not "understanding"
>	 with other possible limitations.  But this actually is at the crux
>	 of the matter because it may be the case that the Turing Test
>	 (if it's possible to operationalize in the first place) cannot 
>	 be passed *without* such "understanding".

I agree with both your definition of understanding and your caveat.
I believe that any system capable of passing the Turing Test will need
to have experienced the world through its sensors and interacted with
the world through its effectors -- and both sensors and effectors will
need to be at least partially similar to those of humans (e.g.,
vision, sound, and touch).

Searle's reply would be that we can just encode these memories into
his "rule book" -- which now needs to encode not only a hugely complex
set of fixed rules, but memory, learning, perception, and sensorimotor
control as well.  In this case, I would say, yes, the room has
understanding -- but at this point the absurdity of Searle's metaphor
becomes rather obvious.
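
To make that concrete, here is a toy sketch of my own (nothing Searle
ever wrote, and radically simplified): the "rule book" as a fixed C
lookup table from input symbol strings to canned replies.  The point
is that nothing in the program refers to anything outside the table --
and that a book of this form, however large, has no obvious place to
put memory, learning, or perception.

/* A toy sketch (mine, not Searle's): the "rule book" as a fixed table
 * mapping input symbol strings to canned replies.  The clerk matches
 * shapes and copies out answers; no meanings, no world, no memory. */
#include <stdio.h>
#include <string.h>

struct rule { const char *input; const char *reply; };

static const struct rule book[] = {
    { "ni hao",    "ni hao"     },  /* greeting -> greeting */
    { "ni hao ma", "wo hen hao" },  /* "how are you?" -> "fine" */
    { "xie xie",   "bu ke qi"   },  /* "thanks" -> "you're welcome" */
};

static const char *consult(const char *symbols)
{
    size_t i;
    /* Pure shape-matching on the input string. */
    for (i = 0; i < sizeof book / sizeof book[0]; i++)
        if (strcmp(symbols, book[i].input) == 0)
            return book[i].reply;
    return "?";  /* a real book would need a rule for every case */
}

int main(void)
{
    printf("%s\n", consult("ni hao ma"));  /* prints: wo hen hao */
    return 0;
}

Compile it with cc and it dutifully answers "wo hen hao" -- with no
Mandarin, and no understanding, anywhere in the machine.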

Searle appeals to common sense -- "a room with a rule book, scraps of
paper, and an ignorant human" can't have understanding -- but he does
so by presenting a metaphor that defies common sense.  Sure, a
room+book+paper+ignoramus can't "understand", but it can't possibly
perform all of the tasks required of it (complex reasoning, learning
from experience, sophisticated perception) either, so what's the
point?

Speed matters.  When your metaphorical systems have response times on
a scale greater than that of human lifetimes -- not to mention space
requirements of interplanetary magnitude -- it's time to find a new
metaphor.
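
To put "interplanetary" mildly, here is a quick worked example (the
numbers are my own assumptions, not anything from Searle):

/* Back-of-the-envelope arithmetic, purely illustrative: if a single
 * conversational exchange is just 80 characters over a 27-symbol
 * alphabet, a table with one fixed reply per possible exchange needs
 * 27^80 entries.  Computed in log space, since the number itself fits
 * in no machine word. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    double symbols = 27.0;  /* assumed alphabet size */
    double length  = 80.0;  /* assumed exchange length, in symbols */

    /* log10(27^80) = 80 * log10(27) ~ 114.5 */
    printf("rule book entries ~ 10^%.1f\n", length * log10(symbols));
    return 0;
}

That's about 10^114 entries, against roughly 10^80 atoms in the
observable universe -- and a fixed table can't even condition on what
was said earlier in the conversation, so the real figure is far worse.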

Consider an equally poor metaphor -- this time for human rather than
machine intelligence -- a Russian Electrochemistry Set that encodes
inputs and outputs with chemical reactions that flow through a network
of glass tubes.  Suppose these inputs and outputs can be mapped to
responses in Russian that simulate those of an intelligent human.
Then encode the synapse potentials and neuron activity of a human
brain into a set of electrochemical reactions.  Now the system seems
to understand, but obviously "common sense" dictates that test tubes,
flasks, and beakers can't understand Russian -- so neither can
human beings...


