Article 1903 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!swrinde!cs.utexas.edu!uunet!tdatirv!sarima
From: sarima@tdatirv.UUCP (Stanley Friesen)
Newsgroups: comp.ai.philosophy
Subject: Re: Chinese Room, from a different perspective
Message-ID: <298@tdatirv.UUCP>
Date: 5 Dec 91 19:29:46 GMT
References: <4dCxHKK00WBKA3VAgN@andrew.cmu.edu>
Reply-To: sarima@tdatirv.UUCP (Stanley Friesen)
Organization: Teradata Corp., Irvine
Lines: 47

In article <4dCxHKK00WBKA3VAgN@andrew.cmu.edu> fb0m+@andrew.cmu.edu (Franklin Boyle) writes:
|Based on this, I'm not sure you've read Searle, but suffice it to
|say that, because of the "systems reply", he did allow that the man
|in the room could internalize what was in the book, so that the man
|(everything from his skin inward) *is* the system ... 
|
| Let's suppose you have memorized Searle's hypothesized rule book
|(so what you have is a lot of associations between different symbols and
|groupings of symbols as well as chains of such associations).  You have *no*
|idea what the symbols refer to, however, and, therefore, what they or
|groupings of them mean.  Now, let someone write you a message in Chinese
|asking you what colors daffodils are. Then let someone write you a message
|in English asking the same question.  Do you "understand" both questions?
|(note: this is different from: Can you respond to both questions with
|the appropriate answer?)  Presumably you didn't understand the first question
|(at least I wouldn't). Why do you think you didn't?

The English speaking 'you' didn't understand the Chinese question, and the
Chinese speaking 'you' didn't understand the English question.

At this point I would say that there are two entities inside the man's
body: his original self, and a new one based on the 'rule book'.  They are
two different personalities being time-shared within one body.  This is
not so different from the result of severing the connection between the
two hemispheres of the brain - two people in one body.
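The quoted 'rule book' amounts to nothing more than symbol-to-symbol
association.  A toy sketch makes the point concrete (the symbols and the
single rule here are invented placeholders, not Searle's actual examples):

```python
# Toy 'rule book': a pure symbol-to-symbol lookup with no referents.
# The question/answer strings below are invented placeholders.
RULE_BOOK = {
    "水仙花是什么颜色?": "黄色。",   # maps one symbol string to another
}

def room_reply(message: str) -> str:
    """Return the associated symbol string, or a stock fallback symbol."""
    return RULE_BOOK.get(message, "请再说一遍。")

# The 'room' produces the appropriate answer string without any link
# between the symbols and daffodils, colors, or anything in the world.
print(room_reply("水仙花是什么颜色?"))
```

Nothing in this table connects any symbol to a tree, a color, or a
daffodil; every "answer" is reachable purely by matching shapes, which is
exactly the situation the quoted passage describes.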

|Consider the problem *physically*.  Do you think you can determine what
|trees, for example, look like or feel like from the *physical* structures
|of all the symbols and symbol strings that have to do with trees? (remember,
|*you* don't know what those symbols or symbol strings mean, so *the only* 
|possible sources of information are the physical structures of the symbol
|strings and associations between those physical structures.)

If this were true then the 'Chinese room' could not pass the Turing test.
It would be seriously deficient in its ability to discuss the outside world.
This is exactly why I do not believe that the 'Chinese room' is even possible
as presented - it fails to address the need to relate sentences to the
outside world.  But once you add this feature it is no different in this
respect from a person.

Thus, I consider the Chinese room a 'straw-man' argument, since it assumes
a form of 'intelligence' that is not even possible for humans.  Even we
must relate meaning to the outside world; we cannot operate entirely on
internal responses.
-- 
---------------
uunet!tdatirv!sarima				(Stanley Friesen)


