From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!uunet!world!kohathi Tue Mar 24 09:55:33 EST 1992
Article 4444 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!uunet!world!kohathi
From: kohathi@world.std.com (Kathleen E Coady)
Subject: Re: The Systems Reply I
Message-ID: <BL1p0D.6II@world.std.com>
Organization: The World Public Access UNIX, Brookline, MA
References: <6374@skye.ed.ac.uk> <1992Mar11.201637.21875@psych.toronto.edu> <1992Mar12.001918.2564@ccu.umanitoba.ca>
Date: Fri, 13 Mar 1992 09:29:00 GMT


	I'm somewhat confused, I believe.  It seems to be one of the premises
of the Chinese room that no method of imparting meaning to the symbols being
manipulated has been supplied, and therefore it is intuitively obvious that
the man in the room does not, merely by executing the rules, understand 
Chinese.
	It is also one of the premises that the Chinese room's answers to the
questions are reasonable answers...I believe that the idea is that this 
apparatus is capable of passing the Turing test.
	What I do not understand is how these two premises avoid being subtly
contradictory.  The reason is that, explicitly or implicitly, in order to
ensure that the answers are reasonable, you must have, embedded somewhere in
the rules, a description of the properties and relationships of the
real-world objects to which the symbols refer.  Or, in other words, your
rules must contain, in some fashion, the knowledge that apples are proper
objects of the verb "eat", and are not properly described by the adjective
"blue".  Please note that it doesn't matter how these rules are specified;
the point is that they have to be in there.
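The point can be made concrete with a deliberately tiny sketch (my own
illustration, not Searle's construction).  The rule table, the predicate
tokens, and the room_reply function below are all hypothetical: to the
rule-follower they are opaque symbol-to-symbol lookups, yet the table cannot
give reasonable answers unless someone has already built facts about apples
into it.

```python
# Hypothetical rule book: each entry pairs an input pattern with a reply.
# To the operator these are opaque tokens; to us, each entry plainly
# encodes a fact about the world (apples are edible, apples are not blue).
RULES = {
    ("CAN-EAT", "APPLE"): "YES",
    ("CAN-EAT", "STONE"): "NO",
    ("IS-BLUE", "APPLE"): "NO",
    ("IS-BLUE", "SKY"):   "YES",
}

def room_reply(predicate: str, obj: str) -> str:
    """Follow the rule book purely by rote; no meanings are consulted."""
    return RULES.get((predicate, obj), "I-DO-NOT-KNOW")

print(room_reply("CAN-EAT", "APPLE"))   # YES
print(room_reply("IS-BLUE", "APPLE"))   # NO
```

The operator executing room_reply never interprets the tokens, but the
sensibleness of the answers lives entirely in the table, which is exactly
the "semantic data disguised as rules" at issue.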
	If you do not meet this condition, the answers the room generates may
be nonsensical or insane.
	If you do meet the condition, you have supplied the room with a view
of a world.  It is no longer intuitively obvious to me that the man in the
room doesn't understand the world you have supplied him with; I am not sure
that, if you ask him whether he understands the symbol language he is
manipulating, the answer will not, quite correctly, be yes.
	The only way I could be intuitively certain that the man in the
Chinese room was operating by rote and didn't understand the view of the
world he was given would be to remove all the rules specifying the
properties and relationships of real-world objects.
	Please note that it's something of a red herring to debate the man's
subjective perception of the world; that it doesn't matter by what method
the real-world data is encoded in the rules; and that it's perfectly
possible to imagine doing this with rules for a wholly imaginary world,
although such a room obviously couldn't pass a Turing test slanted toward
the real world.
	In other words, either this apparatus has semantic data, which we have
inadvertently supplied it disguised as rules, and passes the Turing test; or
it doesn't have semantic data, and fails.
	Please note also that it's not necessary to do anything to the
ordinary definition of understanding in this construction.  The man in the
Chinese room may not understand the world the same way an observer outside
it understands it.  But, at least to me, it no longer seems intuitively
certain that he doesn't understand the view of it that has been presented
to him.


