From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!wupost!uunet!mcsun!uknet!edcastle!aiai!jeff Thu Jan 16 17:19:59 EST 1992
Article 2673 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!wupost!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Virtual Person? (was re: Searle and the Chine
Keywords: personal identity, searle
Message-ID: <5965@skye.ed.ac.uk>
Date: 13 Jan 92 21:28:22 GMT
References: <!!5q-0+@rpi.edu> <335@tdatirv.UUCP> <5894@skye.ed.ac.uk> <1992Jan10.213635.1884@cs.yale.edu>
Reply-To: jeff@aiai.UUCP (Jeff Dalton)
Organization: AIAI, University of Edinburgh, Scotland
Lines: 114

In article <1992Jan10.213635.1884@cs.yale.edu> mcdermott-drew@CS.YALE.EDU (Drew McDermott) writes:
>
>  In article <5894@skye.ed.ac.uk> jeff@aiai.UUCP (Jeff Dalton) writes:
>  >In article <335@tdatirv.UUCP> sarima@tdatirv.UUCP (Stanley Friesen) writes:

>  >>I consider semantics to be a necessary precondition to producing
>  >>a convincing, unrestricted dialog with human beings.
>  >
>  >But again, this is assuming what's to be proved.
>
>Mr. Dalton has got it exactly turned inside out, as Daryl
>McCullough explains very clearly in his reply:

I disagree.  The claims I am considering, and calling question-
begging, are ones like "semantics is necessary for convincing
dialog", i.e. claims like: "whenever there is convincing dialog,
there is semantics".

If it were the case that anything with the right behavior 
had semantics (or intentionality or understanding or whatever
we take the property at issue to be), then the Chinese Room
would understand -- because it has the right behavior.

Searle presents an argument that the CR doesn't have semantics.
Maybe it's not a very good argument, but it's an argument.
(Ok, maybe it's only an intuition pump, as Dennett suggests,
but it's presented as an argument and we can treat it as one.)

If you can _show_ that anything with the right behavior has
semantics, then you have shown that Searle is wrong.  However,
if you merely assert it, you have shown nothing.  Indeed, if 
Searle is correct (and merely asserting as above does nothing
to show he is not), the Chinese Room is an example of something
that has the right behavior but no semantics.  So Searle's
Chinese Room argument can also serve as an argument that
the claim "anything with the right behavior has semantics"
is false.
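
Put schematically (the notation is mine, just to make the shape
of the point explicit):

  Claim:   for all x, right-behavior(x) -> semantics(x)
  Searle:  right-behavior(CR) & ~semantics(CR)

If Searle's argument establishes the second line, the Chinese
Room is a counterexample to the first line, and the claim is
false.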

So I really don't see how anyone can present that claim as
an argument against Searle.  Some people think the claim must
be true, and hence that Searle must be wrong, but again that is
not much of an argument.  Indeed, several people (including me)
have offered examples to try to show how one might make sense
of the idea that something might have the right behavior and
yet not know what it was talking about (so to speak), in
order to weaken the belief that the claim just must be true.

Note too that Searle has not assumed that anything with the
right behavior has semantics.  So the idea that I have it
exactly backwards seems a little strange.  Indeed, when I
look at the stuff quoted below, it seems to be about a different
issue, namely that of whether the systems reply is begging
the question.  No doubt I invited this confusion by writing:

  But what is your reason for thinking it does have the attributes
  of personhood?  Its behavior, just as Searle pointed out when
  accusing the system-reply of question begging.

That _is_ when Searle points this out (so far as I recall).
But saying that is not saying that the systems reply is 
question begging.  Indeed, I wrote the above in reply to

  Really, I see no reason to deny the Chinese Room any of the attributes
  of personhood.

and it was the question-begging status of claims like _that_
which I was considering.

But let's turn to the arguments about the systems reply anyway.

>  >>  There is some confusion as to what exactly Searle thinks he is doing.
>  ...
>  If Searle claims that he can show that Strong AI is
>  nonsense, he has to show that assuming Strong AI leads to a
>  contradiction (or at least to an absurdity), and he hasn't done so. By
>  claiming that the systems reply is begging the question, Searle is
>  essentially saying "Only someone who already believes in Strong AI
>  would believe in the systems reply". So what? If Searle believes he
>  can show that Strong AI is nonsense, then he can certainly show that
>  Strong AI plus the Systems Reply is nonsense. However, it is circular
>  reasoning on the part of Searle if his argument is:
>
>       1. Strong AI is nonsense, because the Systems Reply is nonsense.
>       2. The Systems Reply is nonsense because it depends on Strong AI,
>	  which is nonsense.

Yes, if that _were_ Searle's argument, then he would be employing
circular reasoning.  So what?

>To put it another way: Searle wishes to show that it is absurd to
>believe in Strong AI.  He does so by assuming Strong AI is true and
>then drawing all sorts of conclusions that don't actually follow.
>Having set up this smokescreen, he then turns around and says, What's
>your evidence for Strong AI?  By this time the gullible have forgotten
>that *he assumed it was true* to begin with.

I find it difficult to recognize Searle's Chinese Room argument
in your description.  He starts by assuming that it's possible to
get the right behavior by running the right program.  But this is
just to make room for testing the claims of Strong AI.

According to Strong AI (or perhaps it's Searle's version of Strong
AI), running the right program is enough to get semantics.  But, he
argues, it doesn't get semantics in this case.  This is assuming the
truth of Strong AI only in the sense that "if A then B" assumes the
truth of A.  (The A -> B here is "if Strong AI is right, running the
right program will result in semantics".)
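
Schematically again (again my labels, not Searle's):

  A:  Strong AI is right
  B:  running the right program yields semantics, so the Chinese
      Room, which runs the right program, has semantics

  Premise (what Strong AI says):  A -> B
  Searle argues:                  ~B   (the CR lacks semantics)
  Hence:                          ~A   (modus tollens)

Entertaining A hypothetically in order to test B is not the same
as asserting A.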

So perhaps you're thinking of some other argument of Searle's.
If so, I'm not sure why it's relevant.  The systems reply is a
reply to the Chinese Room; and you at least seem to think we're
considering the systems reply.

-- jd


