From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rutgers!usc!zaphod.mps.ohio-state.edu!cis.ohio-state.edu!cannelloni.cis.ohio-state.edu!chandra Tue Jan 21 09:26:35 EST 1992
Article 2824 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rutgers!usc!zaphod.mps.ohio-state.edu!cis.ohio-state.edu!cannelloni.cis.ohio-state.edu!chandra
From: chandra@cannelloni.cis.ohio-state.edu (B Chandrasekaran)
Newsgroups: comp.ai.philosophy
Subject: Re: Intelligence testing
Message-ID: <CHANDRA.92Jan17095634@cannelloni.cis.ohio-state.edu>
Date: 17 Jan 92 14:56:34 GMT
References: <1992Jan17.141234.9909@oracorp.com>
Sender: news@cis.ohio-state.edu (NETnews)
Organization: Ohio State Computer Science
Lines: 35
In-Reply-To: daryl@oracorp.com's message of Fri, 17 Jan 1992 14:12:34 GMT
Originator: chandra@cannelloni.cis.ohio-state.edu

In article <1992Jan17.141234.9909@oracorp.com> daryl@oracorp.com writes:

   Jeff Dalton writes:

   > ...one thing you wouldn't be right to conclude is that I think
   > the behavior is possible without intentionality -- because (according
   > to the supposition) what I actually think is the opposite.

   > Another thing you wouldn't be right to conclude is that, because
   > I think the opposite, the argument I offered doesn't show that we
   > should reject the TT.  Perhaps, for instance, I am wrong in thinking
   > that the behavior is impossible without intentionality.

   Jeff, I am very puzzled by your statements. You seem to be saying that
   you believe that

	1. (Intelligent, understanding, or whatever) behavior is not possible
	   without intentionality.

	2. Behavior is not sufficient to indicate intentionality (rejection of
	   the Turing Test).

I think we should distinguish between what Searle might believe in
general about what is possible, and what he specifically claims to
have shown in the CR articles.  In the latter, he claims to have
shown only that *even if one grants, as a premise, that human-like
behavior is possible for a properly programmed TM*, such a TM would
not have understanding.  From this we cannot conclude that Searle
really believes that such a TM is in fact possible.  Jeff's point
seems to be that Searle's premise of a TM that passes the
Chinese-understanding TT serves only to show that such a TM would
still not "understand," not to indicate his belief that such TMs are
in fact possible.  But I think Jeff must grant that Searle has not
said in his CR papers that he thinks such a TM is impossible because
TMs cannot have intentionality.
