From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!spool.mu.edu!munnari.oz.au!uunet!mcsun!uknet!edcastle!aiai!jeff Thu Jan 16 17:21:47 EST 1992
Article 2723 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!spool.mu.edu!munnari.oz.au!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Searle, again
Message-ID: <5984@skye.ed.ac.uk>
Date: 14 Jan 92 22:17:18 GMT
Article-I.D.: skye.5984
References: <5907@skye.ed.ac.uk> <1992Jan08.230618.31038@spss.com> <5952@skye.ed.ac.uk> <1992Jan13.200632.36402@spss.com>
Reply-To: jeff@aiai.UUCP (Jeff Dalton)
Organization: AIAI, University of Edinburgh, Scotland
Lines: 40

In article <1992Jan13.200632.36402@spss.com> markrose@spss.com (Mark Rosenfelder) writes:
>In <5952@skye.ed.ac.uk> jeff@aiai.UUCP (Jeff Dalton) writes (quoting me):
>>>But until we have some proof
>>>one way or the other, the Chinese Room argument only proves that if you
>>>believe in "causal powers" you don't believe in strong AI, and so what?
>>
>>What do you mean?  Instead of "causal powers" think "whatever
>>it is about the brain that lets it support intentionality/
>>understanding/etc".  
>
>That's not the same thing.  What allows the brain to support understanding
>might conceivably be its information-handling capabilities-- that is, 
>something that can be duplicated rather than simulated on a computer.

But Searle thinks he has shown it isn't that.  It's _something else_
about the brain.  The something else that matters is the causal powers.

>By "causal powers" Searle presumably means "things about the brain that
>support understanding AND are physical so an algorithm can't have any."
>And the fact that if you postulate the existence of such things you come
>to Searle's conclusions is neither surprising nor very significant.

This is becoming pointless.  It may be possible to get every detail
of an explanation of Searle's "causal powers" right, so that no new
misunderstanding can occur, but I'm going to give up.

For the last time, Searle doesn't assume some mysterious "causal
powers" exist and use that to reach his conclusion that computers
do not understand.  He may do lots of other dubious things, but
that isn't one of them.

>>It should be clear that the system has to understand, somehow,
>>if there's going to be any understanding.  But does it understand?
>>I'm (usually) inclined to agree with you, that Searle has failed
>>to show there can't be understanding there, but not on the grounds
>>that a man couldn't memorize the program.
>
>Wow!  A point of agreement!

Yes.  And I'd rather stop there than produce new disagreements.
