Article 2638 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!qt.cs.utexas.edu!cs.utexas.edu!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Intelligence testing
Message-ID: <5946@skye.ed.ac.uk>
Date: 10 Jan 92 18:40:05 GMT
References: <1992Jan9.185619.1336@oracorp.com>
Reply-To: jeff@aiai.UUCP (Jeff Dalton)
Organization: AIAI, University of Edinburgh, Scotland
Lines: 18

In article <1992Jan9.185619.1336@oracorp.com> daryl@oracorp.com writes:
>Jeff Dalton writes:
> 
>>>> Searle and others also seem to think that the behaviour is perfectly
>>>> possible without such processes (without 'real' intentionality,
>>>> consciousness, thinking).
>
>> Searle doesn't think that.  What is the evidence for this claim?
>
>If the Chinese Room is possible, then it follows (assuming Searle is
>correct, which I don't) that proper behavior without understanding is
>possible.

That Searle is willing to postulate something in order to
present an argument hardly shows he thinks it's actually the
case.  I don't see any reason to suppose Searle thinks the
behavior is perfectly possible without intentionality.
Indeed, I seem to recall that he says the opposite.
