From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!sdd.hp.com!cs.utexas.edu!uunet!psinntp!scylla!daryl Thu Jan 16 17:19:18 EST 1992
Article 2603 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!sdd.hp.com!cs.utexas.edu!uunet!psinntp!scylla!daryl
From: daryl@oracorp.com
Subject: Re: Intelligence testing
Message-ID: <1992Jan9.185619.1336@oracorp.com>
Organization: ORA Corporation
Date: Thu, 9 Jan 1992 18:56:19 GMT

Jeff Dalton writes:
 
>>> Searle and others also seem to think that the behaviour is perfectly
>>> possible without such processes (without 'real' intentionality,
>>> consciousness, thinking).

> Searle doesn't think that.  What is the evidence for this claim?

If the Chinese Room is possible, then it follows (assuming Searle is
correct, which I don't believe) that proper behavior without
understanding is possible. Whether the Chinese Room is possible
depends on how picky you want to be. It certainly isn't *practically*
possible, because it would be far beyond human capabilities to
simulate a Chinese thinker in real time by following syntactic rules.

On the other hand, if you set aside such practical objections, and
assume that computers can be made arbitrarily fast and given
arbitrarily large memories, then it follows immediately that a
computer could pass the test of conversing in Chinese with the
fluency of a native. There are only finitely many possible sensible
conversations in Chinese within the lifetime of a human being: any
such conversation has bounded length, and there are only finitely
many strings of bounded length over a finite vocabulary. The computer
could store all of these conversations and do a simple table look-up,
as someone (perhaps you) has pointed out in the past.
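To make the look-up idea concrete, here is a minimal sketch in Python
(purely illustrative; the table entries and names are hypothetical,
not taken from anything above). The conversation-so-far serves as a
key into a finite table of canned replies, so the control structure
is trivial; the catch, as noted above, is that the table would be
astronomically large.

# A minimal, hypothetical sketch of the table look-up "Chinese
# speaker" described above. The table maps each conversation-so-far
# (the sequence of utterances heard) to a canned next reply. A real
# table would be astronomically large; the objection is practical,
# not logical.

LOOKUP_TABLE = {
    ("ni hao",): "ni hao! ni hao ma?",
    ("ni hao", "wo hen hao"): "tai hao le.",
    # ... one entry for every sensible finite conversation prefix ...
}

def reply(history):
    """Return the canned reply for the conversation so far, or a
    stock apology if the prefix isn't in the (finite) table."""
    return LOOKUP_TABLE.get(tuple(history), "dui bu qi, wo bu dong.")

# The program "converses" by pure look-up, with no understanding:
history = []
for utterance in ["ni hao", "wo hen hao"]:
    history.append(utterance)
    print(reply(history))

The point of the sketch is only that nothing in the mechanism
resembles understanding; it is the finiteness of possible
conversations that makes it work in principle, and their number that
makes it impossible in practice.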

Daryl McCullough
ORA Corp.
301A Harris B. Dates Dr.
Ithaca, NY 14850-1313


