From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!cs.utexas.edu!uunet!mcsun!uknet!edcastle!aiai!jeff Thu Jan  9 10:34:08 EST 1992
Article 2558 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!cs.utexas.edu!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Searle, again
Message-ID: <5907@skye.ed.ac.uk>
Date: 8 Jan 92 19:21:53 GMT
References: <5814@skye.ed.ac.uk> <1991Dec06.233615.27051@spss.com> <5826@skye.ed.ac.uk> <1991Dec11.180924.37884@spss.com>
Reply-To: jeff@aiai.UUCP (Jeff Dalton)
Organization: AIAI, University of Edinburgh, Scotland
Lines: 184

In article <1991Dec11.180924.37884@spss.com> markrose@spss.com (Mark Rosenfelder) writes:
>In article <5826@skye.ed.ac.uk> jeff@aiai.UUCP (Jeff Dalton) writes:
>>If Searle's right about the CR, it doesn't matter what the program is.
>
>Right about what exactly?  Searle makes a number of statements, with varying 
>degrees of plausibility.  With regard to the robot reply the most relevant
>seems to be the assertion that computers have "no semantics."

If Searle's right that the CR doesn't understand Chinese, it doesn't
matter what the program is.  That's the whole point of the "rules": to
be whatever program is supposedly the right one.  His argument doesn't
depend on it being a particular kind of program.  Since it doesn't
matter what the program is, it doesn't matter whether it contains
a "huge amount of sensory and motor experience, and the concepts
associated with them" (assuming that "concepts" isn't meant in some
question-begging sense).

>Let's not merely talk about sensors; let's say the algorithm uses the exact
>same architecture as the human brain, precisely duplicating the function of
>every neuron, synapse, and neurotransmitter.  Such a machine might have an
>external behavior truly indistinguishable from that of a human being.  Even
>so, according to Searle, it would not show understanding.

If it has the right "causal powers", it would have understanding.  But
it wouldn't have it just by virtue of running the right program; it
would have to duplicate at least some of the physical functioning of
neurons, etc., not just the aspects that can be abstracted from their
physical base and simulated in a computer.

Now, what is your evidence that a simulation would have understanding?
Just that it has the right behavior?  In that case, we've made no
advance on the CR situation we started with.

>Picture a program used to maintain the books of a bank.  When deposits or
>withdrawals are made, the program adjusts the values found at various disk
>locations.  Are these adjustments financial transactions, or merely
>simulations of financial transactions?

I don't see why this example is supposed to be helpful.  We can all
agree that a computer does some things.  For instance it can send a
signal that causes a disk head to move, and it can cause different
magnetic values to be recorded on the disk.  If "financial transaction"
can include something like that, then computers can perform financial
transactions; but that gets us no further than our original
observation that computers can record things on disks, something
that Searle would not dispute.
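
(To put it concretely, here is roughly the sort of thing such a
program does, sketched in Lisp with invented names:

    ;; A toy "ledger": each account's balance lives on a symbol's
    ;; property list.
    (setf (get 'account-42 'balance) 100)

    ;; A "deposit" just adds to the stored number.
    (defun deposit (account amount)
      (incf (get account 'balance) amount))

    (deposit 'account-42 50)
    (get 'account-42 'balance)    ; => 150

All the machine does is change a stored value from 100 to 150.
Whether that change counts as a financial transaction depends on the
bank, the customers, and the legal setting around the machine, not
on anything inside it.)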

It certainly does nothing to show that Searle is wrong to say:

>                                       (Sci. Am. 1/90, p. 29).  "One
>can imagine a computer simulation of the oxidation of hydrocarbons in a car
>engine or the action of digestive enzymes in a stomach when it is digesting
>pizza.  And the simulation is no more the real thing in the case of the 
>brain than it is in the case of the car or the stomach."

In any case, you seem to have completely abandoned the robot reply by
this point.

>There are really two categories of things: A) Things which can only be 
>simulated by a program, such as oxidation, digestion, weather; B) Things it
>can directly manipulate, such as money, ASCII text, numbers, or other
>algorithms.  The question is, do meaning, perceptions, thoughts, and 
>understanding belong to category A or B?

If you're talking about "duplicating the physical function of every
neuron, synapse, and neurotransmitter", then you're clearly in (A).
Searle argues that something of that sort is necessary.  (This is
the famed "causal powers of the brain".)

>Clearly physical processes belong to category A; category B is restricted
>to non-physical, symbolic things.  But thought might be such a thing.

Unfortunately, this does nothing to rescue your earlier point
that involved duplicating the function of neurons, etc.

Moreover, Searle presents two arguments (the Chinese Room and "syntax
isn't enough for semantics") that (if correct) show that something
that can't be captured by a program is involved.

>Searle places it in category A.  "All mental phenomena, then, are caused by
>neurophysiological processes in the brain."  Thought is a physical phenomenon
>--the brain secretes thought as the electric eel produces electricity.
>
>Well, it's a point of view, but there are alternatives.  For instance:
>1) Mind is spiritual, and spiritual things cannot be computed (they fall into
>category A).  2) Thought is abstract and symbolic, falling into category B,
>and can be instantiated with either brains or algorithms.  3) Thought is
>immaterial but has properties which cannot be modelled on a Turing machine.
>4) Nobody knows how mind and brain are related and the whole question is
>open.  Searle provides no reasons for preferring his own categorization,
>which indeed he introduces as an axiom, not a conclusion.

You are right that there are alternatives such as dualism.  But even
if you take (2), would that mean that thought isn't caused by
neurophysiological processes in the brain?  Short of dualism, what
else is there?  In any case, it's not true that Searle provides no
reasons for preferring his categorization.  Perhaps he doesn't in
the Sci Am article, but that's not all he's written on the subject.

>Searle's argument also depends on the assertion that computers are incapable
>of meaning-- they have "no semantics."  Unfortunately he never defines what
>meaning is, except to say that thoughts have meaning because "they can be
>about objects and states of affairs in the world" (p. 27).  Why can't 
>algorithms contain structures which refer to objects and states of affairs
>in the world?  Ah, because all mental phenomena (presumably including meaning)
>are physical, caused by "neurophysiological processes."

Why do you insist on ignoring almost all of Searle's arguments and
acting as if all he'd ever said was "all mental phenomena are caused
by neurophysiological processes in the brain"?

In any case, it's trivial for programs to have structures that
refer to things in the world from the point of view of the programmer.
If I say in Lisp (setf (get 'apples 'colour) 'red), I could intend
for these symbols to have the obvious meaning.  But this is what's
called "derived intentionality"; it refers to because we (humans)
give it a meaning.  It doesn't have anything to do with apples
so far as the computer is concerned.  The question is how can the
computer get into the situation we're in and have some original
intentionality rather than only derived intentionality.
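
(A rough illustration of the point, with made-up symbols:

    ;; This looks meaningful to us ...
    (setf (get 'apples 'colour) 'red)
    (get 'apples 'colour)          ; => RED

    ;; ... but to the Lisp system it is the same computation as
    ;; this, with the symbols uniformly renamed:
    (setf (get 'g0017 'g0018) 'g0019)
    (get 'g0017 'g0018)            ; => G0019

The machine does exactly the same work in both cases; it's only we
who read "apples are red" into the first one.)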

>I would like to see an elaboration of this theory of meaning as a physical
>phenomenon.  I would also like to see Searle admit that this material theory
>of meaning is something of a minority viewpoint.  But of course if he could
>entertain a non-material conception of meaning, he would have no argument
>that algorithms are incapable of it.

I think you're mixing together several different things.  Materialism
as opposed to, say, dualism is one thing, and not, I think, a minority
viewpoint.  In any case, Searle's conclusion that something more than
running the right program is involved in understanding doesn't depend
on materialism; it could also work with dualism.

>How does our robot, the one which duplicates the structure and external 
>behavior of the brain, fail to mean?  You call it by name and it responds.  
>You ask it to pick up your handkerchief, or repair your spacecraft, and it 
>complies.  Nor is it merely a matter of outward behavior:

Everything you have described so far is behavior.  Do you think
this behavior is irrelevant, or does it show something on its own?

>by hypothesis, its internal cognitive functioning is the same as
>the brain's. 

Searle's arguments are about all programs, whether they're based
on high-level rules or low-level simulations.  Moreover, Cog Sci
will be a nearly complete failure if the closest it can come to
a computational model of understanding is a brain simulation.

It seems that you're arguing along the lines of "how could something
that makes a functional (but not physical) duplicate of the low-level
operations in the brain possibly fail to understand?"  But Searle has
answered that: because some of the physical properties (or ones with
equivalent causal powers) are necessary.  (You could make a dualist
version of the same claim, but in either case something that wasn't
captured by the program would be needed.)

> Like you, the robot has extensive actual experience which
> backs up the way it uses words.

"Experience" is another word that's in danger of begging the
question.

>It's true that the robot's mental phenomena (meaning, thought, etc.) seem
>to disappear as we look at increasingly lower levels of the algorithm and
>at its underlying hardware.  The CPU, or the man in the Chinese room, do not
>share the robot's understanding.  But we should not attach too much weight
>to this, for we can do the same thing with the brain, even under Searle's
>understanding of mental phenomena.  To Searle, brains (somehow) refer;
>but do neurons?  do molecules?  do atoms?  do quarks?  At some reductive
>level the mental phenomena, in brains or robots, simply disappear.

So at low levels understanding is not present in humans.  That
hardly shows that at high levels it is present in computers.
Searle's arguments don't depend on it being present at low levels
in humans.

So the most you can get is: Searle has shown only that it's
absent at low levels in machines.  But this is just the system
reply again (the man in the room is just the CPU, etc.) about
which much has been said already.

-- jd


