From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!sun-barr!ames!haven.umd.edu!darwin.sura.net!Sirius.dfn.de!zrz.tu-berlin.de!news.netmbx.de!unido!mcsun!uknet!edcastle!aiai!jeff Tue Mar 24 09:55:08 EST 1992
Article 4409 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!sun-barr!ames!haven.umd.edu!darwin.sura.net!Sirius.dfn.de!zrz.tu-berlin.de!news.netmbx.de!unido!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Definition of understanding
Message-ID: <6388@skye.ed.ac.uk>
Date: 11 Mar 92 18:19:50 GMT
References: <1992Mar6.194405.22939@oracorp.com>
Sender: news@aiai.ed.ac.uk
Organization: AIAI, University of Edinburgh, Scotland
Lines: 51

In article <1992Mar6.194405.22939@oracorp.com> daryl@oracorp.com (Daryl McCullough) writes:
>christo@psych.toronto.edu (Christopher Green) writes:
>
>[About the claim that there may be more than one system in one
>physical body]
>
>> If you really want your argument to rely wholly on the very dubious
>> assumption that there are, somehow, two minds running around inside
>> the man's head, feel free, but the utter tendentiousness of the claim
>> is patently obvious to everyone not committed a priori to the belief
>> that computers JUST GOTTA have minds.  In short, it's nothing short of
>> an ad hoc shoring up of a failing research program...
>
>In my opinion, nothing could be farther from the truth. There is
>nothing ad hoc about the claim that there could be several minds in
>one head; it is a *necessary* consequence of (a) the Strong AI
>position, and (b) the assumption that the man has memorized the
>Chinese Room program.


It certainly seemed to arrive in a rather ad hoc way.  There's 
Searle's original argument, the systems reply, Searle's memorization
reply to that, and then finally up pops this idea that there would
be two minds...

On the other hand, I think you're right in saying it's a necessary
consequence of (a) Strong AI (anything that runs the right program has
a mind), plus some other things.  But what other things?  Well, your
(b) is one.  But we still need something to say which of the following
would obtain:

  1. The person in the Room (not some second person) would
     understand Chinese.

  2. A second person would be created and would continue to
     exist so long as the person in the Room continued to follow
     the memorized rules.

  3. A second person would be created and would persist no matter
     what the original person did (perhaps because memorizing the
     program set up the right causal structures).

  and perhaps others

In any case, how do we know a second person would exist?  It's not
because we can consult the computational theory of mind that lets us
construct the program and (since it tells us what a mind is, so to
speak) see that a second mind would be created.  There's no such
theory (at least not yet).

-- jd


