From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!wupost!uunet!mcsun!uknet!edcastle!aiai!jeff Tue Jan 21 09:26:45 EST 1992
Article 2842 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!wupost!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Virtual Person?
Keywords: personal identity, searle
Message-ID: <6007@skye.ed.ac.uk>
Date: 17 Jan 92 20:28:50 GMT
References: <5894@skye.ed.ac.uk> <1992Jan10.213635.1884@cs.yale.edu> <5965@skye.ed.ac.uk> <1992Jan16.040733.23764@cs.yale.edu>
Reply-To: jeff@aiai.UUCP (Jeff Dalton)
Organization: AIAI, University of Edinburgh, Scotland
Lines: 128

In article <1992Jan16.040733.23764@cs.yale.edu> mcdermott-drew@CS.YALE.EDU (Drew McDermott) writes:
>
>  In article <5965@skye.ed.ac.uk> jeff@aiai.UUCP (Jeff Dalton) writes:
>  >[Daryl McCullough had written:]

>  [And finally back to Dalton:]
>  >I find it difficult to recognize Searle's Chinese Room argument
>  >in your description.  He starts by assuming that it's possible to
>  >get the right behavior by running the right program.  But this is
>  >just to make room for testing the claims of Strong AI.
>
>It is devilishly hard to extract Searle's argument in a clear form.
>But if we take a look at the Boden reprint of Searle's BBS article, we
>find this:

I take it that we have all read this article and perhaps other things
as well, such as his Reith Lectures.  Searle says a lot of things in
such articles, not all directly part of his arguments, and it's
easy to find things to pick on.  But I think we owe it to ourselves
to address the best version of his argument and to see if we can
make sense of it even when Searle states it in a confused or confusing
way.

I think I have some understanding of what Searle's argument is, but in
any given News article I could well get something wrong.  I still find
it difficult to recognize Searle's argument in your description, but
I may have failed to put my finger on the actual problem with your
version.

If you just said "Searle assumes Strong AI is true in his
argument about the Chinese Room", I might go along with you.
Nor will you find me disagreeing with Dave Chalmers's version

  Premise 1: If strong AI is true, then there exists a program P such
  that implementing P is sufficient for mentality.

  Premise 2: Any program P can be implemented Chinese-room style,
  without being accompanied by mentality.

  Conclusion: Strong AI is false.

at least as a starting point.
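
The logic of that version, at least, is ordinary modus tollens.  For
what it's worth, here is a minimal sketch in Lean; the names below are
my own labels, not anything from Chalmers:

  -- A sketch only; the formal names are illustrative.
  variable (StrongAI : Prop)                        -- "Strong AI is true"
  variable (Program : Type)
  variable (SufficesForMentality : Program → Prop)

  -- Premise 1: if Strong AI is true, some program suffices for mentality.
  -- Premise 2: no program does (any program can be run Chinese-room
  -- style without mentality).  The conclusion follows by modus tollens.
  example
      (p1 : StrongAI → ∃ p : Program, SufficesForMentality p)
      (p2 : ∀ p : Program, ¬ SufficesForMentality p) :
      ¬ StrongAI :=
    fun h => (p1 h).elim fun p hp => p2 p hp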

On the other hand, I do not agree that Searle's argument is:

  1. Strong AI is nonsense, because the Systems Reply is nonsense.
  2. The Systems Reply is nonsense because it depends on Strong AI,
      which is nonsense.

Now, the way we got here is that Stanley Friesen wrote "he saw no
reason to deny the Chinese Room any of the attributes of personhood",
and:

  I consider semantics to be a necessary precondition to producing
  a convincing, unrestricted dialog with human beings.

I said this was begging the question.  To which you and Daryl
McCullough replied: oh no, you have it backwards -- it's Searle
who's begging the question.

I sent a long reply to this, which you are now answering.  In this
reply, you address only the end of my article, and not the part where
I explain what it was that I thought was question-begging.  Can I
assume that you agree with the earlier part?  Or do you somehow think
the later part shows I was wrong throughout?

To clarify, let me say that the systems reply is often stated as "the
system understands".  Indeed, Searle states it that way in at least
one of his articles, and so at least in that case that's the claim
he's answering.  Moreover, so far as the Chinese Room argument is
concerned, it is begging the question.  If you state the systems reply
in some other way, such as "Searle has shown only that the CPU doesn't
understand; what he has to show is that the system as a whole doesn't
understand", then it is not begging the question.

A further source of confusion would be if you agreed with Daryl
McCullough that

  Strong AI is simply the claim that a machine with the right
  behavior must, therefore, understand,

Daryl gets there by supposing that a program is just a specification
of behavior, and so the AI program that leads to understanding
would be any program that specified "understanding behavior".

If that's really what strong AI is supposed to be, it's not
something I've heard before.  In any case, if Searle _is_
assuming that Strong AI is true, I do not think he's assuming
that version of it.

>  >>  If Searle claims that he can show that Strong AI is
>  >>  nonsense, he has to show that assuming Strong AI leads to a
>  >>  contradiction (or at least to an absurdity), and he hasn't 
>  >>  done so.

Strong AI would say the Chinese Room understands; Searle's argument
is that it doesn't.  This argument (before he addresses the various
replies) may be wrong, but it's not circular.

>  >>  by claiming that the systems reply is begging the question, Searle is
>  >>  essentially saying "Only someone who already believes in Strong AI
>  >>  would believe in the systems reply".

A lot is packed into this "essentially".  I don't agree that this
is "essentially" what Searle is saying at all.  But Daryl would,
because he thinks:

   Strong AI is simply the claim that a machine with the right
   behavior must, therefore, understand, which is logically equivalent
   to the claim that "correct behavior is not possible without
   understanding".
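
As logic, that equivalence is fine; it is nothing more than
contraposition.  A minimal Lean sketch, with B and U as my own labels
for "right behavior" and "understands":

  -- Contraposition; the backward direction needs classical reasoning.
  example (B U : Prop) : (B → U) ↔ (¬U → ¬B) :=
    ⟨fun h hnu hb => hnu (h hb),
     fun h hb => Classical.byContradiction fun hnu => h hnu hb⟩

So my quarrel is not with the inference; it is with the premise that
Strong AI is a claim about behavior alone.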

>I don't see how to continue, unless we shift the meaning of "strong
>AI" from his original statement to some version of Turing Syndrome, a
>hypothetical malady in which the sufferer swears undying faithfulness
>to the idea that passing Turing's Test is criterial evidence for
>intelligence.  

But isn't that exactly what Daryl does in article
<1992Jan16.054716.14332@oracorp.com>, where he writes:

   Strong AI is simply the claim that a machine with the right
   behavior must, therefore, understand, which is logically equivalent
   to the claim that "correct behavior is not possible without
   understanding".

-- jd


