Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!pacific.mps.ohio-state.edu!linac!uchinews!spssig!markrose
From: markrose@spss.com (Mark Rosenfelder)
Newsgroups: comp.ai.philosophy
Subject: Re: Searle, again
Message-ID: <1992Jan08.230618.31038@spss.com>
Date: 8 Jan 92 23:06:18 GMT
References: <5826@skye.ed.ac.uk> <1991Dec11.180924.37884@spss.com> <5907@skye.ed.ac.uk>
Organization: SPSS, Inc.
Lines: 87
Nntp-Posting-Host: spssrs7.spss.com

To make things clear, let me state up front that I am attacking Searle on
two points.  His arguments depend on two dubious assumptions, namely
1. that a simulation of a mind is not a mind; and
2. that computers are incapable of semantics.

Text preceded by > is Jeff Dalton; by >> is me.

1. (simulations)

To me the heart of Searle's argument is the claim that a simulation of a
mind is not a mind.  If this claim is not true, then his argument falls apart.
(To be precise, his axioms 1 and 3 would fail.)  That is why it is worth
examining this notion of when something is a simulation and when it's
the real thing.

>>There are really two categories of things: A) Things which can only be 
>>simulated by a program, such as oxidation, digestion, weather; B) Things it
>>can directly manipulate, such as money, ASCII text, numbers, or other
>>algorithms.  The question is, do meaning, perceptions, thoughts, and 
>>understanding belong to category A or B?

In terms of these categories, I ask: what makes Searle think thought
falls into category A rather than B?  So far as I can see, it's a mere
metaphysical prejudice of Searle's.

This is not to say it's necessarily wrong.  But until we have some proof
one way or the other, the Chinese Room argument only proves that if you
believe in "causal powers" you don't believe in strong AI, and so what?
 
2. (semantics)

>Moreover, Searle presents two arguments (the Chinese Room and "syntax
>isn't enough for semantics") that (if correct) show that something
>that can't be captured by a program is involved.

"Syntax isn't enough for semantics" is offered as an axiom, that is, it's
not an argument but an assumption.  The Chinese Room is intended to be
a demonstration of the plausibility of the assumption (see Sci Am p. 27,
Axiom 3 and preceding paragraph).

Now, the Chinese Room convinces me, without a doubt, that a CPU cannot be
said to understand anything.  (In fact a real CPU is quite a bit dumber 
than Searle-- whether or not Searle would come to understand Chinese by
executing the program, as some people insist on speculating, a CPU certainly
would not.)

But what does it prove when we add in the systems reply and the robot reply?
I find Searle's response to this (p. 30) incredibly lame-- he imagines 
the man in the room internalizing the entire program.  But what if this is
simply impossible?  Perhaps the program is implemented as a neural network
with 10 billion neurons.  It simply makes no sense to claim that the man
in the room could memorize it-- and if he can't, there is no argument 
against the systems reply.  
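
Just to put a number on "simply impossible," here is a rough estimate (the
figures below are my own assumptions, purely illustrative):

    # Illustrative estimate only; both figures are assumptions.
    neurons = 10_000_000_000       # the 10 billion units above
    synapses_per_neuron = 1_000    # assumed connections per unit
    weights = neurons * synapses_per_neuron

    seconds_per_item = 1           # wildly optimistic memorization rate
    seconds_per_year = 60 * 60 * 24 * 365
    print(weights * seconds_per_item / seconds_per_year)  # ~317,098 years

Even at one connection weight per second, around the clock, the man in the
room would need hundreds of thousands of years.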

As for the robot, I would claim that our notion of semantics is rooted
precisely in our experience of the world.  Your understanding of the word
'cat' depends on years of experience with and knowledge about real cats (and
about the rest of the world).  If this is right, then a robot, with a
similar amount of experience and knowledge, could be said to demonstrate
understanding.  Naturally, in the face of this Searle would continue to
intone his mantra that "syntax is not semantics."  Again, this strikes me
as nothing more than prejudice.

3. (minor points)
 
>I think you're mixing together several different things.  Materialism
>as opposed to, say, dualism is one thing, and not, I think, a minority
>viewpoint.  In any case, Searle's conclusion that something more than
>running the right program is involved in understanding doesn't depend
>on materialism; it could also work with dualism.

I wasn't referring to materialism.  Searle believes that mental phenomena
are essentially physical processes-- that's why, on his view, a simulation
of them cannot be the real thing.  An alternative viewpoint is that they
are informational or symbolic phenomena (and thus a simulation of them is
the real thing).  This viewpoint is just as materialist as Searle's.

Dualism certainly offers a nice distinction between syntax and semantics:
semantics might only be possible for souls.  But Searle isn't a dualist,
so this position is not available to him.

>So at low levels understanding is not present in humans.  That
>hardly shows that at high levels it is present in computers.

No, it just shows that reductive arguments apply as well to humans as to
computers, and that systems with understanding (e.g. brains) can be built
out of components without it.
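
A toy illustration of that last point (my own example; nobody claims the
brain is a gate circuit): no single NAND gate performs addition, yet a
circuit of them does.

    # No individual gate "adds" anything...
    def nand(a, b):
        return 1 - (a & b)

    # ...but wired together they form a half-adder that really does
    # add two bits (XOR and AND are each built from NANDs).
    def half_adder(a, b):
        t = nand(a, b)
        total = nand(nand(a, t), nand(b, t))  # XOR of a and b
        carry = nand(t, t)                    # AND of a and b
        return total, carry

    print(half_adder(1, 1))  # (0, 1): one plus one is binary 10

The capacity belongs to the system without belonging to any of its parts.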


