From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!wupost!uunet!psinntp!scylla!daryl Thu Jan 16 17:20:10 EST 1992
Article 2687 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!wupost!uunet!psinntp!scylla!daryl
From: daryl@oracorp.com
Subject: Re: Intelligence testing
Message-ID: <1992Jan14.015806.23985@oracorp.com>
Organization: ORA Corporation
Date: Tue, 14 Jan 1992 01:58:06 GMT

Jeff Dalton writes:

>>If the Chinese Room is possible, then it follows (assuming Searle is
>>correct, which I don't) that proper behavior without understanding is
>>possible.

>That Searle is willing to postulate something in order to 
>present an argument hardly shows he thinks it's actually the
>case. I don't see any reason to suppose Searle thinks the
>behavior is perfectly possible without intentionality.  
>Indeed, I seem to recall that he says the opposite.

Well, if he says the opposite, then he is simply wrong. As I have
already sketched, the mathematical existence of a finite state machine
capable of producing Chinese outputs indistinguishable from those of a
native speaker is not a matter of conjecture; it follows simply from
the finiteness of human lifespans. Of course, the existence of a
finite state machine in the mathematical sense does not imply that
humans will ever discover such a thing; there is also a mathematical
proof of the existence of a perfect chess algorithm, but it may be
practically impossible to build one.
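To make the lookup-table idea concrete: since every conversation a human
can have is finite in length, the map from conversation histories to
replies is a finite function, and any finite function can (in principle)
be tabulated. A minimal sketch in Python, where the table entries and the
Chinese phrases are purely hypothetical placeholders for what would in
reality be an astronomically large table:

```python
# Toy illustration of the "finite lookup table" argument. A bounded
# conversation length means finitely many possible histories, so a
# machine that maps each history to a reply exists mathematically.
# The tiny table below stands in for that (astronomically large) map.

RESPONSES = {
    (): "Ni hao.",                    # opening move: no history yet
    ("Ni hao.",): "Ni hao ma?",       # one-exchange history
}

def reply(history):
    """Return the canned reply for a given conversation history.

    A complete table would need one entry per possible finite
    history -- finite in principle, but so large that building it
    is exactly the practical impossibility noted above."""
    return RESPONSES.get(tuple(history), "Wo bu dong.")
```

The point of the sketch is only the existence claim: nothing in the
construction requires understanding, just exhaustive tabulation.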

So, if someone wants to base an argument against AI on practical
considerations, I am quite sympathetic; there is no evidence that we
will ever get powerful enough computers to simulate human minds. But
if you are saying that it is impossible in principle, I think that you
need a better argument.

Daryl McCullough
ORA Corp.
Ithaca, NY
