From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!cs.utexas.edu!uunet!tdatirv!sarima Thu Jan  9 10:34:28 EST 1992
Article 2591 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!cs.utexas.edu!uunet!tdatirv!sarima
From: sarima@tdatirv.UUCP (Stanley Friesen)
Newsgroups: comp.ai.philosophy
Subject: Re: Causes and Reasons
Keywords: reasons and causes
Message-ID: <361@tdatirv.UUCP>
Date: 8 Jan 92 23:42:18 GMT
References: <1991Dec15.120726.6592@husc3.harvard.edu> <45209@mimsy.umd.edu> <5896@skye.ed.ac.uk>
Reply-To: sarima@tdatirv.UUCP (Stanley Friesen)
Organization: Teradata Corp., Irvine
Lines: 37

In article <5896@skye.ed.ac.uk> jeff@aiai.UUCP (Jeff Dalton) writes:
|A more effective version of applying anti-AI arguments to
|humans would be to say that (in Searle's case) if the argument
|is correct it would also show that humans do not understand
|(in the sense of understand indicated above); but humans do
|understand; therefore Searle's argument must be incorrect.

I thought that was what I said!
That is certainly my point when *I* say that an argument applies equally
well to humans.  I most assuredly do *not* mean to deny that humans have
this evanescent quality called understanding.

|Please note that if this works it is just another way of showing
|that Searle has failed to prove his conclusion.  It does nothing
|whatsoever to show that computers would actually understand.

I hope I have never claimed any more than that, at least as far as *proof*
is concerned.  I have claimed that a reasonable extension of this point
is that research into the possibility of computer 'understanding' is
worthwhile.  [Of course this would also require a more careful definition
of 'understanding'].

|However, the Chinese Room can't be used in this way, because
|what it would show is not that humans don't understand but
|that they don't understand merely by virtue of running the
|right program.  Searle has already applied his argument in
|this way.  That's why he concludes that the brain must in
|addition have sufficient "causal powers".

But he has never shown why computers cannot have such 'causal powers',
or even that the Chinese Room does not have them.  He has merely
asserted that 'it is intuitively obvious that the CR has no causal powers'.
Well, not to me.
-- 
---------------
uunet!tdatirv!sarima				(Stanley Friesen)