From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!qt.cs.utexas.edu!yale.edu!yale!hsdndev!rutgers!rochester!cantaloupe.srv.cs.cmu.edu!crabapple.srv.cs.cmu.edu!andrew.cmu.edu!fb0m+ Mon Dec 16 11:00:38 EST 1991
Article 1987 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!qt.cs.utexas.edu!yale.edu!yale!hsdndev!rutgers!rochester!cantaloupe.srv.cs.cmu.edu!crabapple.srv.cs.cmu.edu!andrew.cmu.edu!fb0m+
From: fb0m+@andrew.cmu.edu (Franklin Boyle)
Newsgroups: comp.ai.philosophy
Subject: Re: Chinese Room, from a different perspective
Message-ID: <EdEwEvO00iUzA2j8J7@andrew.cmu.edu>
Date: 9 Dec 91 19:30:03 GMT
Organization: Cntr for Design of Educational Computing, Carnegie Mellon, Pittsburgh, PA
Lines: 19

Stanley Friesen writes:

> If this were true then the 'Chinese room' could not pass the Turing test.
> It would be seriously deficient in its ability to discuss the outside world.
> This is exactly why I do not believe that the 'Chinese room' is even possible
> as it is presented - it fails to deal with the need to relate sentences to
> the outside world.  But once you add this feature it is no different in this
> respect than a person.
>
> Thus, I consider the Chinese room to be a 'straw-man' argument, since it
> assumes a form of 'intelligence' that is not even possible for humans.  Even
> we must relate meaning to the outside, we cannot interact entirely on
> internal responses.

I quite agree.  The fundamental issue is *how* this feature is physically
implemented once you add it.

-Frank
