Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!aunro!ukma!wupost!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Searle, again
Message-ID: <5952@skye.ed.ac.uk>
Date: 10 Jan 92 20:15:51 GMT
References: <5826@skye.ed.ac.uk> <1991Dec11.180924.37884@spss.com> <5907@skye.ed.ac.uk> <1992Jan08.230618.31038@spss.com>
Reply-To: jeff@aiai.UUCP (Jeff Dalton)
Organization: AIAI, University of Edinburgh, Scotland
Lines: 70

In article <1992Jan08.230618.31038@spss.com> markrose@spss.com (Mark Rosenfelder) writes:
>To make things clearer let me clarify that I am attacking Searle on two
>points.  His arguments depend on two dubious assumptions, namely
>1. that a simulation of a mind is not a mind; and
>2. that computers are incapable of semantics.

I don't think his arguments employ these as _assumptions_ at all;
certainly not 2.

I'm beginning to think that the Sci Am article must be a very 
bad explanation of Searle's position...

>This is not to say it's necessarily wrong.  But until we have some proof
>one way or the other, the Chinese Room argument only proves that if you
>believe in "causal powers" you don't believe in strong AI, and so what?

What do you mean?  Instead of "causal powers" think "whatever it is
about the brain that lets it support intentionality, understanding,
and so on".

>But what does it prove when we add in the systems reply and the robot reply?
>I find Searle's response to this (p. 30) incredibly lame-- he imagines 
>the man in the room internalizing the entire program.  But what if this is
>simply impossible?  Perhaps the program is implemented as a neural network
>with 10 billion neurons.  It simply makes no sense to claim that the man
>in the room could memorize it-- and if he can't, there is no argument 
>against the systems reply.  

No argument that will convince you (and many others), at least.  

It should be clear that the system has to understand, somehow,
if there's going to be any understanding.  But does it understand?
I'm (usually) inclined to agree with you that Searle has failed
to show there can't be understanding there, but not on the grounds
that a man couldn't memorize the program.

On the other hand, I find it hard to see how the robot reply adds
anything.  In any case, I've discussed the robot reply in other
messages and don't have anything new to add here.

>I wasn't referring to materialism.  Searle believes that mental phenomena
>are essentially physical processes-- that's why they can't be simulated.
>An alternative viewpoint is that they are informational or symbolic
>phenomena (and thus can be simulated).  This viewpoint is just as
>materialist as Searle's.

Searle doesn't say they can't be simulated, just that simulation
isn't good enough.

Moreover, Searle doesn't say that minds require exactly the
physical processes that occur in brains.  His other example
is green slime.  What he argues is that something doesn't
have a mind merely by instantiating the right program.
And no matter how the program gets going, there are going
to have to be some physical processes involved.

>Dualism certainly offers a nice distinction between syntax and semantics:
>semantics might only be possible for souls.  But Searle isn't a dualist,
>so this position is not available to him.

True.

>>So at low levels understanding is not present in humans.  That
>>hardly shows that at high levels it is present in computers.
>
>No, it just shows that reductive arguments apply as well to humans as to
>computers, and that systems with understanding (e.g. brains) can be built
>out of components without it.

Something that Searle has not questioned.


