From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!wupost!darwin.sura.net!europa.asd.contel.com!uunet!mcsun!uknet!edcastle!aisb!jeff Tue Jan 28 12:16:43 EST 1992
Article 3072 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!wupost!darwin.sura.net!europa.asd.contel.com!uunet!mcsun!uknet!edcastle!aisb!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Virtual Person?
Message-ID: <1992Jan23.204125.23190@aisb.ed.ac.uk>
Date: 23 Jan 92 20:41:25 GMT
References: <1992Jan22.195657.19911@bronze.ucs.indiana.edu> <1992Jan22.224344.7404@aisb.ed.ac.uk> <1992Jan22.235812.11080@bronze.ucs.indiana.edu>
Sender: news@aisb.ed.ac.uk (Network News Administrator)
Organization: AIAI, University of Edinburgh, Scotland
Lines: 148

In article <1992Jan22.235812.11080@bronze.ucs.indiana.edu> chalmers@bronze.ucs.indiana.edu (David Chalmers) writes:
>In article <1992Jan22.224344.7404@aisb.ed.ac.uk> jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
>
>>I don't follow you.  Searle argues about whether things understand
>>merely by running the right program.  If the VP needs the "causal
>>powers" (if, we might say, Chinese Rooms understand only because
>>there's a person in them), then strong AI would still be in trouble,
>>even though Searle's argument might need adjusting.
>
>For Searle's argument to succeed, he has to exhibit an implementation
>of the given program that doesn't understand.  If it turns out that
>his own choice of implementation, for whatever reason, does understand,
>then the argument's no good.  Of course that doesn't imply that
>strong AI is automatically true, just that this particular argument
>doesn't work.  

Just so.  Hence the need for "adjusting".  (Whether there are
adjustments that work is another question.)

However, suppose that we do find out that the "virtual person"
has to be in someone's brain.  That very discovery would be bad
news for strong AI, apart from any arguments of Searle's.

>Talking about the existence of conscious mental states is enough.
>"Person" is just shorthand.

Perhaps only persons can understand.  That is, maybe the shorthand
doesn't actually shorten.

In any case, there has to be some reason for us to conclude that 
there's some sort of virtual person, if we want to say anything
more than "maybe, somehow, there's a virtual person".

McDermott seems to think (if I recall correctly) that we can reason
thus: the same computational theory of mind that let us build the
"understanding program" would say there was a virtual person.  

This is, presumably, one of the reasons he wants to have Searle
assuming that Strong AI is true.  But eventually assumptions are
discharged, and then you have conditionals.  So all that can
actually be shown from this is

  if strong ai is right, then the Room understands

And that's just fine with Searle!  For suppose Searle's in the
middle of some argument that starts with an assumption that strong
ai is right.  Well, we know that Searle concludes the Room doesn't
understand.  So there's the contradiction: if strong ai is right,
then the Room understands and the Room doesn't understand.
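
Spelled out schematically (this is just my gloss, not anyone's
official notation, writing S for "strong AI is right" and U for
"the Room understands"):

  S \to U              (if strong AI is right, the Room understands)
  S                    (assumed, for the sake of argument)
  U                    (from the two lines above)
  \neg U               (Searle: the Room doesn't understand)
  U \wedge \neg U      (the contradiction)
  \neg S               (discharge the assumption)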

What might be questioned in my account of this is "we know that Searle
concludes the Room doesn't understand".  Because "the Room" in the
above essentially means the system: the person, the rule books, the
scraps of paper, etc; and it might be supposed that Searle doesn't
say anything about _the system_ until he deals with the systems
reply.

Indeed, there may already be messages on the net flaming me for this,
because I've already used something like it in another article.

Nonetheless, I think there's an implicit premise (and maybe it's
even explicit in some of Searle's versions of his argument) that
if there's any understanding going on in the Chinese Room, it
must be because the person in the Room understands.  That's why
Searle thinks noting that the person in the room doesn't understand
is enough to show that instantiating or implementing the program
isn't enough for understanding.

Indeed, Searle writes things like (from the 2nd Reith Lecture):

  If you [the person in the Room] don't understand Chinese, 
  then no other computer could understand Chinese because no
  digital computer, just by virtue of running a program, has
  anything you don't have.

But even in this way of seeing the argument, the systems reply still
makes sense.  Because the systems reply says that the person in the
Room is not in a position to see any understanding.  They're just
looking at (and carrying out) these little mechanical processes,
and of course if you look at some little mechanical processes you
won't (or at least not merely by carrying them out) see anything
like understanding.  The systems reply can even refer back to the
first Reith Lecture, where Searle says:

  ... but it's hard to see how mere physical systems could have
  consciousness.  ...  How could this grey and white gook inside
  my skull be conscious?

and:

  How can this stuff inside my head be about anything? ... How,
  to put it crudely, can atoms in the void represent anything?

and:

  ... I can't reach into this glass, pull out a molecule and say this
  one's wet.

    In exactly the same way, ..., though we can say of a particular
  brain, this brain is conscious or this brain is experiencing thirst
  or pain, we cannot say of any particular neuron, this neuron is in
  pain, this neuron is experiencing thirst.

So what does Searle say to that?

We may recall that Searle says some things about syntax vs semantics.
For instance (and now we're back in the 2nd Reith Lecture), he says
"programs are defined purely syntactically".  But when he addresses
the systems reply in the same lecture, he says something different:

  There's no way the system can get from syntax to semantics.
  I, as the CPU, have no way of figuring out what any of these
  symbols mean, but then neither does the whole system.

And earlier in the lecture:

  The rules specify the manipulations of the symbols purely
  formally, in terms of their syntax, not their semantics.
  So a rule might say: `Take a squiggle-squiggle sign out of
  basket number one and put it next to a squoggle-squoggle 
  sign from basket number two.'

  ...

  There you are, locked in your room, shuffling your Chinese symbols,
  ...  On the basis of the situation as I have described it, there's
  no way you could learn any Chinese simply by manipulating these
  formal symbols.

These passages give us a new point of emphasis.  The program is
defined syntactically, but so are the symbols.  The symbols are
manipulated as meaningless shapes.  Moreover, the person in the
room can't figure out what they mean.  It's hard to see how the
Chalmers argument, that programs specify a causal structure,
addresses this point.

On the other hand, Searle still hasn't really addressed the
systems reply.  If the person can't figure out what the symbols
mean, then the person plus some pieces of paper, baskets, etc.,
can't either.  I think we should agree with that.  But this is
still looking at it from the point of view of the CPU.  The
program is one for conversation in Chinese, not one for figuring
out what meaningless symbols mean; so it seems reasonable to
say it isn't doing what the person in the room is doing; it
isn't trying to figure out what all these strange symbols could
possibly mean.

-- jd


