From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!wupost!uunet!mcsun!uknet!edcastle!aiai!jeff Tue Mar 24 09:56:58 EST 1992
Article 4567 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!wupost!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Definition of understanding
Message-ID: <6432@skye.ed.ac.uk>
Date: 18 Mar 92 18:33:23 GMT
References: <1992Mar17.213546.25838@oracorp.com>
Sender: news@aiai.ed.ac.uk
Organization: AIAI, University of Edinburgh, Scotland
Lines: 51

In article <1992Mar17.213546.25838@oracorp.com> daryl@oracorp.com (Daryl McCullough) writes:
>jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
>
>> ...we still need something to say which of the following would obtain:
>> 
>>   1. The person in the Room (not some second person) would
>>      understand Chinese.
>> 
>>   2. A second person would be created and would continue to
>>      exist so long as the person in the Room continued to follow
>>      the memorized rules.
>> 
>>   3. A second person would be created and would persist no matter
>>      what the original person did (perhaps because memorizing the
>>      program set up the right causal structures).
>
>Good to hear from you again, Jeff. I think that of these 3
>possibilities, only number 2. can work. As far as number 1, it's
>obvious that memorizing the Chinese Room rules would not allow someone
>to translate between English and Chinese, which you would expect a
>person who understood both English and Chinese to be able to do.

A good point.

BTW, a while back (maybe a couple of years by now) the discussion took
a turn where some people were arguing that the person in the Room
would learn Chinese.  Many of the familiar syntax vs semantics
arguments reappear in this context, and a natural question to ask
(I think) is: if the person in the Room can't learn Chinese from
the information available, how is it that the Room system manages
to understand it?

I'm not presenting this as an argument against the systems reply,
but I think that (a) it's an interesting question, and (b) it
helps show why it might be reasonable for someone who thought
the Room system understood Chinese to also think the person in the
Room could learn Chinese.

>As far as number 3. is concerned, I don't see how memorization sets up
>any particular "causal structure" if the rules aren't carried out.  To
>me, the only meaning to a program is a specification of a state
>machine; if the specification is not met, then you don't have the
>state machine any more than buying a blueprint will give you a house.

I was thinking of Chalmers-like arguments that a program specifies
a causal structure.  It may be that memorizing the program sets up
the state machine, in effect.  On the other hand, maybe what this
shows is that there's a defect in the "crumbly cake" argument.
What do you think?

-- jeff
