Article 4424 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!utgpu!cs.utexas.edu!sun-barr!olivea!spool.mu.edu!mips!swrinde!gatech!cc.gatech.edu!terminus!centaur
From: centaur@terminus.gatech.edu (Anthony G. Francis)
Newsgroups: comp.ai.philosophy
Subject: Re: Definition of understanding
Message-ID: <centaur.700370638@cc.gatech.edu>
Date: 12 Mar 92 03:23:58 GMT
References: <1992Mar6.194405.22939@oracorp.com> <6388@skye.ed.ac.uk>
Sender: news@cc.gatech.edu
Organization: Georgia Tech College of Computing
Lines: 152

jeff@aiai.ed.ac.uk (Jeff Dalton) writes:

>In article <1992Mar6.194405.22939@oracorp.com> daryl@oracorp.com (Daryl McCullough) writes:
>>christo@psych.toronto.edu (Christopher Green) writes:
>>
>>[About the claim that there may be more than one system in one
>>physical body]
>>
>>> If you really want your argument to rely wholly on the very dubious
>>> assumption that there are, somehow, two minds running around inside
>>> the man's head, feel free, but the utter tendentiousness of the claim
>>> is patently obvious to everyone not committed a priori to the belief
>>> that computers JUST GOTTA have minds.  In short, it's nothing short of
>>> an ad hoc shoring up of a failing research program...
>>In my opinion, nothing could be farther from the truth. There is
>>nothing ad hoc about the claim that there could be several minds in
>>one head; it is a *necessary* consequence of (a) the Strong AI
>>position, and (b) the assumption that the man has memorized the
>>Chinese Room program.
>It certainly seemed to arrive in a rather ad hoc way.  There's 
>Searle's original argument, the systems reply, Searle's memorization
>reply to that, and then finally up pops this idea that there would
>be two minds...

Where is this `pops' coming from? I don't see any `popping up of an idea'
when Searle gave his memorization reply, any more than I see the 
memorization reply as an ad hoc shoring up of a failing thought experiment.
Actually, I _do_ see the memorization reply as an ad hoc shoring up of
a failing thought experiment, primarily because I thought up the Systems
Reply independently; the two-minds idea falls out of the systems view
naturally, rather than popping up to rescue it.

My main problem with the memorization response is that it fails to distinguish
between the man as a system and the CR program as a system. The distinction is
a fairly clear one, in terms of computer science; it corresponds precisely to
the notion of virtual machines.

As a new thought experiment, let's consider the "memorized Intel Window" 
problem. Tell me, if I run SoftPC on my girlfriend's Macintosh, does 
her operating system "understand" DOS binaries written in Intel 80x86 
machine language? No. But something in the Macintosh does, and _that_ 
system behaves in such a way that it appears to understand 80x86 code. 
Give it a set of squiggles (in the guise of a DOS program, loaded into 
SoftPC) and it can produce the correct set of squoggles (input and 
output). But the underlying chip does _not_ understand DOS code.

You might claim that the Macintosh, so extended, does understand DOS
programs; in one sense, that is true. But then, you've _agreed_ to 
the systems reply, because in a very real sense, the Macintosh corresponds
to the whole Chinese Room, man (Mac OS), CR program (SoftPC program) and 
physical room (the sleek Mac LC box). Yes, the whole room understands Intel
binaries, but the Mac OS does not. If you give it _in its expected sensory
modality_ a DOS program, it won't be able to read it; however, it can
execute the SoftPC program, which can read the DOS binary and "pretend"
that it is an Intel machine. You can load programs and run them, and 
receive the correct behavior, but no matter how much you do, the 
basic Mac OS does _not_ know DOS, and will tell you so if you give it
DOS input. The Intel machine running in that SoftPC window is "virtual";
it doesn't "really exist", it just behaves like it does.
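To make the virtual-machine point concrete, here is a toy sketch in C.
It is only an illustration under invented assumptions: the opcodes and
run_guest() are made up for this post, not real 80x86 instructions or
any part of SoftPC. The host chip executes only the _host_ program; the
guest program is mere data to it, and yet the system as a whole computes
the guest's answer.

/* Host interpreter running a guest program.  The guest "machine
 * language" (HALT, PUSH, ADD, PRINT) is invented for illustration. */
#include <stdio.h>

enum { HALT, PUSH, ADD, PRINT };

void run_guest(const int *code)           /* plays the role of SoftPC */
{
    int stack[16], sp = 0, pc = 0;
    for (;;) {
        switch (code[pc++]) {
        case HALT:  return;
        case PUSH:  stack[sp++] = code[pc++];         break;
        case ADD:   sp--; stack[sp-1] += stack[sp];   break;
        case PRINT: printf("%d\n", stack[--sp]);      break;
        }
    }
}

int main(void)
{
    /* To the host, these are squiggles: data, not instructions. */
    int guest[] = { PUSH, 2, PUSH, 3, ADD, PRINT, HALT };
    run_guest(guest);        /* yet the *system* prints 5 all the same */
    return 0;
}

Nothing in the host's own instruction set "knows" PUSH or ADD; only the
host-plus-guest-program system does.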

Similarly, in the "memorized Chinese Room" problem, the Level 0 machine
(the man) has stored the code for a Level 1 machine (the Chinese Room
Program). The man can still operate on Level 0; for instance, he can
answer questions, such as the simple and obvious question: "Do you
understand Chinese?" with its equally simple and obvious reply:
"Hey, man, I'm just an ME taking Intro Psych. Let me out of this room, 
give me my ten points extra credit for participating in this dumb 
experiment, and let me go home." However, the man is capable of
interpreting his stored code for the Level 1 machine, given certain
protocols for input (cards with squiggles, perhaps asking "Do you
understand Chinese?" _in_ Chinese) and output (more cards with 
squiggles, which, perhaps, might read in Chinese, "Fluently.")

In a very real sense, in the memorization response Searle has simply
taken the cover off of the Macintosh and said "See? The system _still_
says that it doesn't understand." Changing the packaging (from the
room to the man) doesn't change the nature of the virtual machine
involved, although the physical location and means of execution of
the Level 1 virtual machine have been misleadingly changed in a way
that _by his own definition_ does not affect the Level 1 machine's
processing. Before, the man was a simple interpreter operating on
the program external to himself, like a chip running a program in
secondary memory. In the Memorization reply, the man acts as if he were
a chip running a process within its own memory. _There is no difference_.
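The same point again in code, as a toy sketch with invented pieces
(step() and the file rules.dat are hypothetical): whether the rulebook
is fetched from outside or held in the interpreter's own memory, the
act of obeying a rule is identical.

/* "External rulebook" vs. "memorized rulebook": the fetch differs,
 * the execution step does not.  step() and rules.dat are invented. */
#include <stdio.h>

static int step(int op)                 /* stand-in for obeying one rule */
{
    return op * op;
}

static int memorized[] = { 1, 2, 3, -1 };    /* rules in own memory */

int main(void)
{
    int op, i;
    FILE *rulebook = fopen("rules.dat", "rb");   /* rules kept external */

    if (rulebook) {
        while (fread(&op, sizeof op, 1, rulebook) == 1 && op != -1)
            printf("%d\n", step(op));
        fclose(rulebook);
    }

    for (i = 0; memorized[i] != -1; i++)         /* same step() call */
        printf("%d\n", step(memorized[i]));

    return 0;
}

Moving the program from secondary storage into the interpreter's own
memory changes where the fetch comes from, and nothing else.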

>On the other hand, I think you're right in saying it's a necessary
>consequence of (a) Strong AI (anything that runs the right program has
>a mind), plus some other things.  But what other things?  Well, your
>(b) is one.  But we still need something to say which of the following
>would obtain:

>  1. The person in the Room (not some second person) would
>     understand Chinese.
No, the person in the room would not understand Chinese. The room does.
The Macintosh does not understand DOS binaries. The system (Mac executing
SoftPC program) does.

>  2. A second person would be created and would continue to
>     exist so long as the person in the Room continued to follow
>     the memorized rules.
It's tricky to call the CR program a person, because it is not normally
described in very person-like terms. But, as long as the man followed
the rules, the correct behavior would be produced, and the "second
person" would exist.

>  3. A second person would be created and would persist no matter
>     what the original person did (perhaps because memorizing the
>     program set up the right causal structures).
Tricky. Does the virtual IBM PC exist when the SoftPC disk lies on the shelf?
When the system has frozen the IBM task? When the task has terminated
and is no longer in memory?

>  and perhaps others
One thing that bothers many people is that computers can act in any fashion;
the structure does not determine the behavior, and therefore the behavior
is "meaningless". The SoftPC window is the same; it need not act like an IBM,
and yet it does, in every way meaningful to a program running within it.
The SoftPC program has _constraints_ on it which make it behave like an IBM;
current computer models have _constraints_ on them that make them behave like
people, even though the underlying hardware does not require us to build
our model in such a fashion. Some have said that an "artificial neural net,
but not one like we have now" might produce intelligence; however, if the
behavior of those neurons could be specified then they could be entirely
simulated by any good general-purpose Turing machine, translated into
squoggles, and given to our 1,000,000,000 IQ college sophomore in the 
Chinese Room to memorize and execute.
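For instance, here is what "the behavior of those neurons could be
specified" amounts to, as a toy sketch; the weights and the threshold
rule are invented for illustration, not a model of real neurons.

/* A fully specified toy "neuron": weighted sum plus threshold.
 * Once specified this way, it is pure arithmetic, and any
 * general-purpose machine (or patient sophomore) can execute it. */
#include <stdio.h>

#define N 3

double neuron(const double in[N], const double w[N], double bias)
{
    double sum = bias;
    int i;
    for (i = 0; i < N; i++)
        sum += in[i] * w[i];
    return sum > 0.0 ? 1.0 : 0.0;        /* fire or don't fire */
}

int main(void)
{
    double in[N] = { 1.0, 0.0, 1.0 };
    double w[N]  = { 0.5, -0.3, 0.4 };
    printf("%g\n", neuron(in, w, -0.6)); /* 0.5+0.4-0.6 = 0.3 > 0: fires */
    return 0;
}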

Is there anything wrong with some basic premise of the Memorization Reply,
or is it just me?

>In any case, how do we know a second person would exist?  It's not
>because we can look at the computational theory of mind that let us
>construct the program and (because it tells us what a mind is, so
>to speak) see that a second mind would be created.  There's no such
>theory (at least not yet).

_By Searle's definition_, the program that runs the CR has the behavioral
characteristics that match what we can observe as a mind; therefore,
anywhere it runs correctly, it produces mindlike behavior. Whether that
behavior "produces understanding", and thus whether something that behaves
like a human mind on an underlying architecture different from a human
brain's should be called a mind, is the issue at hand.

>-- jd
-Anthony Francis
--
Anthony G. Francis, Jr.  - Georgia Tech {Atl.,GA 30332}
Internet Mail Address: 	 - centaur@cc.gatech.edu
UUCP Address:		 - ...!{allegra,amd,hplabs,ut-ngp}!gatech!prism!gt4864b
-------------------------------Quote of the post------------------------------ 
"Cerebus doesn't love you ... Cerebus just wants all your money" 
		- Cerebus the Aardvark, from a _Church and State_ T-shirt
------------------------------------------------------------------------------


