Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Definition of understanding
Message-ID: <6516@skye.ed.ac.uk>
Date: 24 Mar 92 19:13:45 GMT
References: <1992Mar6.194405.22939@oracorp.com> <6388@skye.ed.ac.uk> <centaur.700370638@cc.gatech.edu>
Sender: news@aiai.ed.ac.uk
Organization: AIAI, University of Edinburgh, Scotland
Lines: 103

In article <centaur.700370638@cc.gatech.edu> centaur@terminus.gatech.edu (Anthony G. Francis) writes:

[in response to me (JD)]

>As a new thought experiment, let's consider the "memorized Intel Window" 
>problem. Tell me, if I run Soft PC on my girlfriend's Macintosh, does 
>her operating system "understand" DOS binaries written in Intel 80x86 
>machine language? No. But something in the Macintosh does, and _that_ 
>system behaves in such a way that it appears to understand 80x86 code. 
>Give it a set of squiggles (in the guise of a DOS program, loaded into 
>Soft PC) and it can produce the correct set of squoggles (input and 
>output). But the underlying chip does _not_ understand DOS code.

This is irrelevant, because it's using a metaphorical sense of
"understand" that's not the sense of "understand" involved in the
Chinese Room argument.  Now, if you want to show that it really is
the same sense of "understand", that might accomplish something.
Merely giving this example does not.
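
(As an aside, the squiggle-to-squoggle behaviour described above is
nothing more than rule-driven symbol shuffling, and a few lines of C
are enough to sketch it.  The sketch is purely illustrative: the rule
table, the strings, and the function names are invented for the
example and have nothing to do with the real SoftPC or the 80x86
instruction set.)

    #include <stdio.h>
    #include <string.h>

    /* A toy "room": each rule pairs an input squiggle with an output
       squoggle.  The table is consulted purely by string matching; no
       meaning is attached to either column. */
    struct rule { const char *squiggle; const char *squoggle_out; };

    static const struct rule rules[] = {
        { "squiggle-1", "squoggle-A" },
        { "squiggle-2", "squoggle-B" },
        { "squiggle-3", "squoggle-C" },
    };

    /* Look the input up in the rule table and emit whatever the
       matching rule says. */
    static const char *respond(const char *in)
    {
        int i;
        for (i = 0; i < (int)(sizeof rules / sizeof rules[0]); i++)
            if (strcmp(rules[i].squiggle, in) == 0)
                return rules[i].squoggle_out;
        return "no rule applies";
    }

    int main(void)
    {
        /* Correct input/output behaviour, with zero comprehension. */
        printf("%s\n", respond("squiggle-2"));   /* prints "squoggle-B" */
        return 0;
    }

(It gives the "right" answers for exactly as long as the table covers
the inputs, and nothing in it ever attaches a meaning to either
column -- which is all the analogy needs.)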

BTW, one of the odd things about this debate is that so many
people on the systems reply side seem to think that the people
on the other side have never heard of these ideas.  But I don't
think there's much disagreement about whether we can find more
than one system in the various versions of the Chinese Room.
The disagreement is about whether certain systems understand
Chinese, not about whether they exist as systems.  (There's also
a dispute about whether the systems are persons.)

Re: memorization reply.

>           Before, the man was a simple interpreter operating on
>the program external to himself, like a chip running a program in
>secondary memory. In the Memorization reply, the man acts as if he was
>a chip running a process within its own memory. _There is no difference_.

Why do you think this is a point _against_ Searle?  Searle thinks that
if he can show there's no understanding in one case, he's also shown
it for the other.  And if the cases are equivalent (no difference),
then he's right.
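
(The interpreter-versus-memorization point can be put in the same
computing terms.  In the illustrative sketch below -- again with
invented names and rules, not anything from Searle's paper -- one
responder is handed its rule table from outside, while the other
carries the same table inside itself; a caller gets no behavioural
evidence of which one it is talking to.)

    #include <stdio.h>
    #include <string.h>

    struct rule { const char *in; const char *out; };

    /* "External" mode: the rules live outside the responder and are
       handed to it, the way the original man consults a rulebook he
       does not contain. */
    static const char *respond_external(const struct rule *book, int n,
                                        const char *msg)
    {
        int i;
        for (i = 0; i < n; i++)
            if (strcmp(book[i].in, msg) == 0)
                return book[i].out;
        return "no rule applies";
    }

    /* "Internal" mode: the same rules are built into the responder
       itself, the way the memorizing man carries the rulebook in his
       head. */
    static const char *respond_internal(const char *msg)
    {
        static const struct rule book[] = {
            { "squiggle-1", "squoggle-A" },
            { "squiggle-2", "squoggle-B" },
        };
        return respond_external(book, 2, msg);
    }

    int main(void)
    {
        static const struct rule book[] = {
            { "squiggle-1", "squoggle-A" },
            { "squiggle-2", "squoggle-B" },
        };
        /* Both modes produce the same answers; nothing observable
           distinguishes them from the outside. */
        printf("%s / %s\n", respond_external(book, 2, "squiggle-1"),
               respond_internal("squiggle-1"));
        return 0;
    }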

Re: my (JD's) response to Daryl McCullough:

>>On the other hand, I think you're right in saying it's a necessary
>>consequence of (a) Strong AI (anything that runs the right program has
>>a mind), plus some other things.  But what other things?  Well, your
>>(b) is one.  But we still need something to say which of the following
>>would obtain:
>
>>  1. The person in the Room (not some second person) would
>>     understand Chinese.
>No, the person in the room would not understand Chinese. The room does.
>The Macintosh does not understand DOS binaries. The system (Mac executing
>SoftPC program) does.
>
>>  2. A second person would be created and would continue to
>>     exist so long as the person in the Room continued to follow
>>     the memorized rules.
>It's tricky to call the CR program a person, because it is not normally
>described in very person-like terms. But, as long as the man followed
>the rules, the correct behavior would be produced, and the "second
>person" would exist.
>
>>  3. A second person would be created and would persist no matter
>>     what the original person did (perhaps because memorizing the
>>     program set up the right causal structures).
>Tricky. Does the virtual IBM PC exist when the SoftPC disk lies on the shelf?
>When the system has frozen the IBM task? When the task has terminated but
>is not in memory?

Bear in mind that my aim in listing these possibilities was to
indicate why something more than Daryl's (a) and (b) was needed:
something had to say which of the possibilities would be the case
(and it was my view that (a) and (b) weren't sufficient).

>>In any case, how do we know a second person would exist?  It's not
>>because we can look at the computational theory of mind that let us
>>construct the program and (because it tells us what a mind is, so
>>to speak) see that a second mind would be created.  There's no such
>>theory (at least not yet).
>
>_By Searle's definition_, the program that runs the CR has the behavioral
>characteristics that match what we can observe as a mind;

Ok, for external behavior.  (E.g., Searle doesn't say whether or not
the program ever thinks anything to itself, e.g. "what a pointless
conversation this is".)

>                                                           therefore,
>anywhere it runs correctly, it produces mindlike behavior. 

Ok, for external behavior.

>                                                           The matter of
>defining whether its behavior "produces understanding," and thus whether
>something should be called a mind if it behaves like a human mind yet has 
>a different underlying architecture than a human brain, is the issue at hand.

Do you really want to say the _behavior_ produces understanding?
Or that we should answer the question by _definition_?

In any case, it looks like we're more or less agreeing at this point.

-- jd


