From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!aunro!ukma!wupost!uunet!mcsun!uknet!edcastle!aiai!jeff Thu Jan 16 17:19:41 EST 1992
Article 2643 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!aunro!ukma!wupost!uunet!mcsun!uknet!edcastle!aiai!jeff
>From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Causes and Reasons
Keywords: reasons and causes
Message-ID: <5951@skye.ed.ac.uk>
Date: 10 Jan 92 19:55:32 GMT
References: <1991Dec15.120726.6592@husc3.harvard.edu> <45209@mimsy.umd.edu> <5896@skye.ed.ac.uk> <361@tdatirv.UUCP>
Reply-To: jeff@aiai.UUCP (Jeff Dalton)
Organization: AIAI, University of Edinburgh, Scotland
Lines: 45

In article <361@tdatirv.UUCP> sarima@tdatirv.UUCP (Stanley Friesen) writes:
>In article <5896@skye.ed.ac.uk> jeff@aiai.UUCP (Jeff Dalton) writes:
>|A more effective version of applying anti-AI arguments to
>|humans would be to say that (in Searle's case) if the argument
>|is correct it would also show that humans do not understand
>|(in the sense of understand indicated above); but humans do
>|understand; therefore Searle's argument must be incorrect.
>
>I thought that is what I said!

Perhaps so.  If I implied otherwise, consider me corrected.

>|Please note that if this works it is just another way of showing
>|that Searle has failed to prove his conclusion.  It does nothing
>|whatsoever to show that computers would actually understand.
>
>I hope I have never claimed any more than that, at least as far as *proof*
>is concerned.  I have claimed that a reasonable extension of this point
>is that research into the possibility of computer 'understanding' is
>worthwhile.  [Of course this would also require a more careful definition
>of 'understanding'].

Sounds reasonable to me.

>|However, the Chinese Room can't be used in this way, because
>|what it would show is not that humans don't understand but
>|that they don't understand merely by virtue of running the
>|right program.  Searle has already applied his argument in
>|this way.  That's why he concludes that the brain must in
>|addition have sufficient "causal powers".
>
>But he has never shown why computers cannot have such 'causal powers'.
>Or even that the Chinese Room does not have 'causal powers'.  He has
>merely said 'it is intuitively obvious that the CR has no causal powers'.
>Well, not to me.

Well, since Searle says people are computers (though in a somewhat
odd sense), it looks like he's willing to allow that some computers
can have the required causal powers.

He certainly does not say 'it is intuitively obvious that the CR has
no causal powers'.  For an explanation of the "causal powers" phrase
see other messages.

-- jeff