From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!cs.utexas.edu!uunet!psinntp!scylla!daryl Tue Jan 21 09:26:35 EST 1992
Article 2823 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!cs.utexas.edu!uunet!psinntp!scylla!daryl
From: daryl@oracorp.com
Subject: Re: Intelligence testing
Message-ID: <1992Jan17.141234.9909@oracorp.com>
Organization: ORA Corporation
Date: Fri, 17 Jan 1992 14:12:34 GMT

Jeff Dalton writes: 

> ...one thing you wouldn't be right to conclude is that I think
> the behavior is possible without intentionality -- because (according
> to the supposition) what I actually think is the opposite.

> Another thing you wouldn't be right to conclude is that, because
> I think the opposite, the argument I offered doesn't show that we
> should reject the TT.  Perhaps, for instance, I am wrong in thinking
> that the behavior is impossible without intentionality.

Jeff, I am very puzzled by your statements. You seem to be saying that
you believe that

     1. (Intelligent, understanding, or whatever) behavior is not possible
        without intentionality.

     2. Behavior is not sufficient to indicate intentionality (rejection of
        the Turing Test).

Now, it seems to me that 1 and 2 are out-and-out logical
contradictions (they are negations of each other). If intentionality
is necessary for behavior, then behavior is sufficient to indicate
intentionality. Perhaps it is an issue of modalities: you believe 1,
but hold that 2 is a possibility (that is, you believe that you might
be wrong about 1). In any case, because 1 and 2 are contradictory, to
the extent that Searle's Chinese Room is an argument in favor of 2, it
is also an argument against 1.
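The contradiction can be checked mechanically. The sketch below (my own
illustration, not part of the original exchange) reads statement 1 as the
material conditional B -> I ("behavior implies intentionality") and
statement 2 as its negation, then enumerates every assignment of truth
values; no assignment satisfies both:

```python
# Truth-table check: statement 1 is read as the material conditional
# B -> I; statement 2 is read as its negation. The modal reading
# (believing 1 while allowing that 1 might be false) is not captured here.
from itertools import product

def implies(p, q):
    return (not p) or q

rows = []
for B, I in product([True, False], repeat=2):
    s1 = implies(B, I)        # 1: behavior implies intentionality
    s2 = not implies(B, I)    # 2: behavior does not imply intentionality
    rows.append((B, I, s1, s2))

# No row makes both statements true at once.
assert not any(s1 and s2 for _, _, s1, s2 in rows)
```

Under this material reading the two statements are exact negations; the
modal escape hatch mentioned above (believing 1 while conceding 2 is
possible) is the only way to hold both without contradiction.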

Daryl McCullough
ORA Corp.
Ithaca, NY
