From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rpi!usc!cs.utexas.edu!uunet!mcsun!uknet!edcastle!aiai!jeff Thu Jan 16 17:21:46 EST 1992
Article 2721 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rpi!usc!cs.utexas.edu!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Intelligence testing
Message-ID: <5982@skye.ed.ac.uk>
Date: 14 Jan 92 22:04:01 GMT
References: <1992Jan14.015806.23985@oracorp.com>
Reply-To: jeff@aiai.UUCP (Jeff Dalton)
Organization: AIAI, University of Edinburgh, Scotland
Lines: 33

In article <1992Jan14.015806.23985@oracorp.com> daryl@oracorp.com writes:
>Jeff Dalton writes:
>
>>>If the Chinese Room is possible, then it follows (assuming Searle is
>>>correct, which I don't) that proper behavior without understanding is
>>>possible.
>
>>That Searle is willing to postulate something in order to 
>>present an argument hardly shows he thinks it's actually the
>>case. I don't see any reason to suppose Searle thinks the
>>behavior is perfectly possible without intentionality.  
>>Indeed, I seem to recall that he says the opposite.
>
>Well, if he says the opposite, then he is simply wrong. As I have
>already sketched, the mathematical existence of a finite state machine
>capable of producing Chinese outputs indistinguishable from those of a
>native speaker is not a matter of conjecture; it is simply a fact that
>follows from the finiteness of human lifespans.

A good example, because few would want to argue that such a
machine must have understanding.
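For readers who want the finite-state-machine point made concrete: since any conversation within a finite lifespan is itself finite, one could in principle enumerate every possible dialogue history and pair each with a reply, giving a machine that converses by pure lookup. A minimal Python sketch of that idea follows; the function name and the two-entry table are invented purely for illustration, and a real table would of course be astronomically large.

```python
# Toy illustration of the "finite lookup table" argument: a responder
# that maps entire conversation histories to canned replies, with no
# understanding of the content. The tiny table here is hypothetical.

# Keys are whole conversation histories (tuples of utterances so far);
# values are the reply a native speaker might plausibly give next.
LOOKUP_TABLE = {
    ("ni hao",): "ni hao! ni hao ma?",
    ("ni hao", "ni hao! ni hao ma?", "wo hen hao"): "tai hao le.",
}

def reply(history):
    """Return the stored reply for this exact history, acting as a
    finite-state responder; unknown histories fall off the table."""
    return LOOKUP_TABLE.get(tuple(history), "(history not in table)")

print(reply(["ni hao"]))  # the machine "converses" by lookup alone
```

Such a machine is finite by construction, which is all the mathematical-existence claim needs; whether it could ever be built, or would understand anything, is exactly what is in dispute.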

>So, if someone wants to base an argument against AI on practical
>considerations, I am quite sympathetic; there is no evidence that we
>will ever get powerful enough computers to simulate human minds. But
>if you are saying that it is impossible in principle, I think you
>need a better argument.

*I* am not saying that.  What I am saying in this thread is that
Searle thinks the behavior is not possible without understanding.
Maybe I'm wrong, of course, and a relevant quote from Searle
would show that I am.  I will also look for such direct evidence
on this point and let you know if I find it.
