Article 2043 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!aunro!ukma!asuvax!ncar!elroy.jpl.nasa.gov!swrinde!mips!pacbell.com!att!linac!uchinews!spssig!markrose
From: markrose@spss.com (Mark Rosenfelder)
Newsgroups: comp.ai.philosophy
Subject: Re: Searle, again
Message-ID: <1991Dec11.180924.37884@spss.com>
Date: 11 Dec 91 18:09:24 GMT
References: <5814@skye.ed.ac.uk> <1991Dec06.233615.27051@spss.com> <5826@skye.ed.ac.uk>
Organization: SPSS, Inc.
Lines: 92
Nntp-Posting-Host: spssrs7.spss.com

In article <5826@skye.ed.ac.uk> jeff@aiai.UUCP (Jeff Dalton) writes:
>If Searle's right about the CR, it doesn't matter what the program is.

Right about what, exactly?  Searle makes a number of claims, with varying
degrees of plausibility.  With regard to the robot reply, the most relevant
seems to be his assertion that computers have "no semantics."

Let's not merely talk about sensors; let's say the algorithm uses the exact
same architecture as the human brain, precisely duplicating the function of
every neuron, synapse, and neurotransmitter.  Such a machine might have an
external behavior truly indistinguishable from that of a human being.  Even
so, according to Searle, it would not show understanding.
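
For concreteness: one step of such a neuron-level simulation might look
something like the toy update below.  This is only a sketch, of course --
real neurons are vastly more complicated, and every name in it is made up.

    /* Toy leaky integrate-and-fire update for one simulated neuron.
       It only illustrates the kind of computation the thought
       experiment assumes, scaled up to every neuron and synapse. */
    #include <stddef.h>

    struct neuron {
        double v;          /* membrane potential */
        double threshold;  /* firing threshold */
        int    fired;      /* did it spike this step? */
    };

    /* One time step: leak, integrate weighted inputs, fire, reset. */
    void step(struct neuron *n, const double *in, const double *w, size_t k)
    {
        n->v *= 0.9;                      /* leak toward rest */
        for (size_t i = 0; i < k; i++)
            n->v += w[i] * in[i];         /* weighted synaptic input */
        n->fired = (n->v >= n->threshold);
        if (n->fired)
            n->v = 0.0;                   /* reset after a spike */
    }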

Why not?  Because it's a simulation, he says (Sci. Am. 1/90, p. 29).  "One
can imagine a computer simulation of the oxidation of hydrocarbons in a car
engine or the action of digestive enzymes in a stomach when it is digesting
pizza.  And the simulation is no more the real thing in the case of the 
brain than it is in the case of the car or the stomach."

This is an interesting claim, but is it true?  Is it the doom of the computer
to be able to do nothing but simulate?

Picture a program used to maintain the books of a bank.  When deposits or
withdrawals are made, the program adjusts the values found at various disk
locations.  Are these adjustments financial transactions, or merely
simulations of financial transactions?

It's hard to maintain that the computer's transactions are anything but real.
If the computer changes a certain value somewhere on disk to n, then $n is
what you have in the bank.  Your money has become data.  There is nothing more
concrete in the bank that corresponds to your money (certainly not the bank's
store of cash, which is much less than the sum of all depositors' balances).
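
To make this concrete, here's roughly the sort of update such a program
performs, sketched in C.  (The record layout and file name are made up;
the point is that the bytes written here ARE the balance -- there is no
further, more "real" transaction behind them.)

    #include <stdio.h>

    struct account {
        long id;
        long cents;        /* the balance itself, in cents */
    };

    /* Add 'amount' cents to the record at position 'index'. */
    int deposit(const char *path, long index, long amount)
    {
        FILE *f = fopen(path, "r+b");
        struct account a;

        if (f == NULL)
            return -1;
        fseek(f, index * (long)sizeof a, SEEK_SET);
        if (fread(&a, sizeof a, 1, f) != 1) {
            fclose(f);
            return -1;
        }
        a.cents += amount;                /* the transaction itself */
        fseek(f, index * (long)sizeof a, SEEK_SET);
        fwrite(&a, sizeof a, 1, f);
        fclose(f);
        return 0;
    }

    int main(void)
    {
        /* Deposit $25.00 into the (hypothetical) account in record 7. */
        if (deposit("accounts.dat", 7, 2500) != 0)
            perror("accounts.dat");
        return 0;
    }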

If you don't like that example, here's another: When you use your favorite
text editor to cut and paste, does the program operate on your text, or on a
simulation of the text?  When you ask for the square root of 4, do you 
get 2 or a simulation of 2?  When you compile a C program, is the computer
only simulating a compile?
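
In C, for instance (the buffer contents are made up, but the operations
are the real thing):

    #include <math.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char text[] = "pastecut ";     /* the text itself, as bytes */
        char tmp[3];
        double r = sqrt(4.0);          /* r holds 2, not a simulation of 2 */

        /* Cut "cut" out of the buffer and paste it at the front:
           these calls rearrange the actual bytes of the text. */
        memcpy(tmp, text + 5, 3);      /* save "cut" */
        memmove(text + 3, text, 5);    /* shift "paste" rightward */
        memcpy(text, tmp, 3);          /* text is now "cutpaste " */

        printf("%s%g\n", text, r);     /* prints "cutpaste 2" */
        return 0;
    }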

There are really two categories of things: A) things a program can only
simulate, such as oxidation, digestion, or weather; B) things a program can
directly manipulate, such as money, ASCII text, numbers, or other
algorithms.  The question is, do meaning, perceptions, thoughts, and
understanding belong to category A or B?
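
One member of category B is worth pausing on: other algorithms.  A program
can take another algorithm as an ordinary value and operate on it directly.
A trivial C sketch (the particular functions are made up):

    #include <stdio.h>

    static int square(int x) { return x * x; }

    /* 'twice' manipulates whatever algorithm it is handed: the
       function passed in is the algorithm, not a simulation of it. */
    static int twice(int (*f)(int), int x)
    {
        return f(f(x));
    }

    int main(void)
    {
        printf("%d\n", twice(square, 3));   /* prints 81 */
        return 0;
    }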

Clearly physical processes belong to category A; category B is restricted
to non-physical, symbolic things.  But thought might be such a thing.

Searle places it in category A.  "All mental phenomena, then, are caused by
neurophysiological processes in the brain."  Thought is a physical phenomenon
--the brain secretes thought as the electric eel produces electricity.

Well, it's a point of view, but there are alternatives.  For instance:

1) Mind is spiritual, and spiritual things cannot be computed (they fall
   into category A).
2) Thought is abstract and symbolic, falling into category B, and can be
   instantiated in either brains or algorithms.
3) Thought is immaterial but has properties which cannot be modelled on a
   Turing machine.
4) Nobody knows how mind and brain are related, and the whole question is
   open.

Searle provides no reasons for preferring his own categorization, which
indeed he introduces as an axiom, not a conclusion.


Searle's argument also depends on the assertion that computers are incapable
of meaning-- they have "no semantics."  Unfortunately he never defines what
meaning is, except to say that thoughts have meaning because "they can be
about objects and states of affairs in the world" (p. 27).  Why can't 
algorithms contain structures which refer to objects and states of affairs
in the world?  Ah, because all mental phenomena (presumably including meaning)
are physical, caused by "neurophysiological processes."

I would like to see an elaboration of this theory of meaning as a physical
phenomenon.  I would also like to see Searle admit that this material theory
of meaning is something of a minority viewpoint.  But of course if he could
entertain a non-material conception of meaning, he would have no argument
that algorithms are incapable of it.

How does our robot, the one which duplicates the structure and external 
behavior of the brain, fail to mean?  You call it by name and it responds.  
You ask it to pick up your handkerchief, or repair your spacecraft, and it 
complies.  Nor is it merely a matter of outward behavior: by hypothesis,
its internal cognitive functioning is the same as the brain's.  Like you,
the robot has extensive actual experience which backs up the way it uses 
words.

It's true that the robot's mental phenomena (meaning, thought, etc.) seem
to disappear as we look at increasingly lower levels of the algorithm and
at its underlying hardware.  Neither the CPU nor the man in the Chinese room
shares the robot's understanding.  But we should not attach too much weight
to this, for we can do the same thing with the brain, even under Searle's
understanding of mental phenomena.  To Searle, brains (somehow) refer;
but do neurons?  do molecules?  do atoms?  do quarks?  At some reductive
level the mental phenomena, in brains or robots, simply disappear.


