Article 2672 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!uakari.primate.wisc.edu!ames!ncar!uchinews!spssig!markrose
From: markrose@spss.com (Mark Rosenfelder)
Newsgroups: comp.ai.philosophy
Subject: Re: Searle, again
Message-ID: <1992Jan13.200632.36402@spss.com>
Date: 13 Jan 92 20:06:32 GMT
References: <5907@skye.ed.ac.uk> <1992Jan08.230618.31038@spss.com> <5952@skye.ed.ac.uk>
Organization: SPSS, Inc.
Lines: 33
Nntp-Posting-Host: spssrs7.spss.com

In <5952@skye.ed.ac.uk> jeff@aiai.UUCP (Jeff Dalton) writes (quoting me):
>>But until we have some proof
>>one way or the other, the Chinese Room argument only proves that if you
>>believe in "causal powers" you don't believe in strong AI, and so what?
>
>What do you mean?  Instead of "causal powers" think "whatever
>it is about the brain that lets it support intentionality/
>understanding/etc".  

That's not the same thing.  What allows the brain to support understanding
might conceivably be its information-handling capabilities -- that is,
something a computer could duplicate rather than merely simulate.

By "causal powers" Searle presumably means "things about the brain that
support understanding AND are physical so an algorithm can't have any."
And the fact that if you postulate the existence of such things you come
to Searle's conclusions is neither surprising nor very significant.

>It should be clear that the system has to understand, somehow,
>if there's going to be any understanding.  But does it understand?
>I'm (usually) inclined to agree with you, that Searle has failed
>to show there can't be understanding there, but not on the grounds
>that a man couldn't memorize the program.

Wow!  A point of agreement!  Still, what's wrong with the memorization
reply?  (It's not mine, by the way; William Poundstone elaborates it
further.)

>Searle doesn't say they [minds] can't be simulated, just that simulation
>isn't good enough.

Sorry, sloppy writing on my part.  You can simulate anything; the question
is whether the simulation is also an instance of the thing simulated.


