From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!cs.utexas.edu!uunet!mcsun!uknet!edcastle!aiai!jeff Tue Jan 21 09:27:35 EST 1992
Article 2936 of comp.ai.philosophy:
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Intelligence testing
Message-ID: <6032@skye.ed.ac.uk>
Date: 21 Jan 92 01:23:47 GMT
References: <1992Jan17.141234.9909@oracorp.com>
Reply-To: jeff@aiai.UUCP (Jeff Dalton)
Organization: AIAI, University of Edinburgh, Scotland
Lines: 54

In article <1992Jan17.141234.9909@oracorp.com> daryl@oracorp.com writes:
>Jeff Dalton writes: 
>
>> ...one thing you wouldn't be right to conclude is that I think
>> the behavior is possible without intentionality -- because (according
>> to the supposition) what I actually think is the opposite.
>
>> Another thing you wouldn't be right to conclude is that, because
>> I think the opposite, the argument I offered doesn't show that we
>> should reject the TT.  Perhaps, for instance, I am wrong in thinking
>> that the behavior is impossible without intentionality.
>
>Jeff, I am very puzzled by your statements. You seem to be saying that
>you believe that
>
>     1. (Intelligent, understanding, or whatever) behavior is not possible
>        without intentionality.
>
>     2. Behavior is not sufficient to indicate intentionality (rejection of
>        the Turing Test).
>
>Now, it seems to me that 1 and 2 are out-and-out logical
>contradictions (they are negations of each other). 

Yes, and that's why if a machine with the right behavior came along,
and I were convinced by Searle that it didn't have intentionality, and
I believed 1, I'd have to change my mind.

One thing to note is that it's a conflict between an argument (plus an
instance) that shows that 2 holds, and a belief (perhaps with no good
arguments to back it up) that 1 holds.

>If intentionality
>is necessary for behavior, then behavior is sufficient to indicate
>intentionality. Perhaps it is an issue with modalities, that is, you
>believe 1, but you believe 2 is a possibility (that is, you believe
>that you might be wrong about 1).  In any case, because 1 and 2 are
>contradictory, to the extent that Searle's Chinese Room is an argument
>in favor of 2, it is also an argument against 1.

But not directly.

Indeed, it might be that

  (a) Searle's argument shows that understanding is not produced
      merely by running the right program.

  (b) Machines never manage to produce the right behavior.

  (c) Some other entities with the right behavior come along; they
      don't understand either, yet (a) does not apply to them.

So we end up knowing (2), and hence that (1) is false, but without any
help from Searle.
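The bare logic of the exchange can be put in Lean-style notation (an
anachronistic sketch, obviously not anything from the thread; Entity,
B, and I are names I'm supplying for illustration):

```lean
-- Sketch only: B x = "x exhibits the right behavior",
--              I x = "x has intentionality".
variable {Entity : Type} (B I : Entity → Prop)

-- Claim 1: behavior is impossible without intentionality.
-- Claim 2: some entity has the behavior but lacks intentionality,
--          i.e. behavior does not suffice (rejection of the TT).
-- Daryl's point: the two claims contradict each other.
theorem one_contradicts_two :
    (∀ x, B x → I x) → ¬ ∃ x, B x ∧ ¬ I x :=
  fun h1 ⟨x, hB, hnI⟩ => hnI (h1 x hB)

-- The point of (c): any single counterexample, Searle-style or not,
-- establishes 2 and thereby refutes 1.
theorem witness_refutes_one (x : Entity) (hB : B x) (hnI : ¬ I x) :
    ¬ ∀ y, B y → I y :=
  fun h1 => hnI (h1 x hB)
```

The second theorem makes no mention of how the witness was found,
which is exactly why (a)-(c) leave Searle's argument out of it.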


