Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!cis.ohio-state.edu!sei.cmu.edu!fs7.ece.cmu.edu!crabapple.srv.cs.cmu.edu!andrew.cmu.edu!fb0m+
From: fb0m+@andrew.cmu.edu (Franklin Boyle)
Newsgroups: comp.ai.philosophy
Subject: Re: A Behaviorist Approach to AI Philosophy
Message-ID: <gdDwSR600Uh_I3JvgL@andrew.cmu.edu>
Date: 6 Dec 91 18:55:25 GMT
Organization: Center for Design of Educational Computing, Carnegie Mellon, Pittsburgh, PA
Lines: 25

Brian Yamauchi writes:

> I agree with both your definition of understanding and your caveat.
> I believe that any system capable of passing the Turing Test will need
> to have experienced the world through its sensors and interacted with
> the world through its effectors -- and both sensors and effectors will
> need to have at least a partial similarity to those of humans (i.e.
> vision, sound, touch, etc.).
>  
> Searle's reply would be that we can just encode these memories into
> his "rule book" -- which now needs to encode not only a hugely complex
> set of fixed rules, but memory, learning, perception, and sensorimotor
> control as well.  In this case, I would say, yes, the room has
> understanding -- but at this point the absurdity of Searle's metaphor
> becomes rather obvious.
>  

The key word above is "encode".  As long as these memories are
encoded (presumably by those who constructed the rule book), they
will, in general, not transmit aspects of the outside world in a way
that gives the person in the room information about it.  I say "in
general" because the encodings may be structure-preserving --
essentially re-presentations of the sensory input (see Harnad's
symbol grounding paper in Physica D, 1990).

-Frank
