From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!aunro!ukma!wupost!uunet!mcsun!uknet!edcastle!aiai!jeff Thu Jan  9 10:33:50 EST 1992
Article 2529 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!aunro!ukma!wupost!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Causes and Reasons
Keywords: reasons and causes
Message-ID: <5896@skye.ed.ac.uk>
Date: 7 Jan 92 20:35:15 GMT
References: <1991Dec15.120726.6592@husc3.harvard.edu> <45209@mimsy.umd.edu>
Reply-To: jeff@aiai.UUCP (Jeff Dalton)
Organization: AIAI, University of Edinburgh, Scotland
Lines: 42

In article <45209@mimsy.umd.edu> kohout@cs.umd.edu (Robert Kohout) writes:

>                       If, on the other hand, you mean to imply
>some a priori semantic reality which exists independent of any observer,
>and which cannot be captured by syntactic manipulations, don't you need also
>show that humans have somehow tapped into this? Or can we safely follow
>Searle and say that humans obviously have _it_ , whatever it may be, and
>syntax just as obviously doesn't? It just isn't that obvious to me.

One of the clever ideas behind the Chinese Room is to use
understanding Chinese rather than "tapping into some a priori
semantic reality" as the thing we're looking for.

Some people have been claiming that all of the anti-AI arguments
might also apply to humans.  But Searle's conclusion is that
computers can't (merely by running the right program) understand
a language in the very sense in which humans understand one.

Some people understand Chinese.  What Searle claims to have
shown is that computers can't understand Chinese *in that very
same sense of "understand"*.  Since that understanding is
specifically something people do, we can all safely follow
Searle and say that humans obviously "have it".

A more effective way of applying anti-AI arguments to
humans would be to say that (in Searle's case) if the argument
is correct, it would also show that humans do not understand
(in the sense of "understand" indicated above); but humans do
understand; therefore Searle's argument must be incorrect.

Please note that if this works it is just another way of showing
that Searle has failed to prove his conclusion.  It does nothing
whatsoever to show that computers would actually understand.

However, the Chinese Room can't be used in this way, because
what it would show is not that humans don't understand but
that they don't understand merely by virtue of running the
right program.  Searle has already applied his argument in
this way.  That's why he concludes that the brain must in
addition have sufficient "causal powers".

-- jeff
