From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael Mon Dec 16 11:01:13 EST 1991
Article 2048 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael
From: michael@psych.toronto.edu (Michael Gemar)
Subject: Re: Searle, again
Message-ID: <1991Dec11.230822.698@psych.toronto.edu>
Organization: Department of Psychology, University of Toronto
References: <1991Dec06.233615.27051@spss.com> <5826@skye.ed.ac.uk> <1991Dec11.180924.37884@spss.com>
Date: Wed, 11 Dec 1991 23:08:22 GMT

In article <1991Dec11.180924.37884@spss.com> markrose@spss.com (Mark Rosenfelder) writes:
[discussion of Searle's claim that computers can only simulate, not
 instantiate, minds]

>This is an interesting claim, but is it true?  Is it the doom of the computer
>to be able to do nothing but simulate?
>
>Picture a program used to maintain the books of a bank.  When deposits or
>withdrawals are made, the program adjusts the values found at various disk
>locations.  Are these adjustments financial transactions, or merely
>simulations of financial transactions?
>
>It's hard to maintain that the computer's transactions are anything but real.
>If the computer changes a certain value somewhere on disk to n, then $n is
>what you have in the bank.  Your money has become data.  There is nothing more
>concrete in the bank that corresponds to your money (certainly not the bank's
>store of cash, which is much less than the sum of all depositors' balances).

It is only because we (or more precisely, the bank) *interpret* what the computer
does as being about money that such transactions are "real".  I only have
N dollars in the bank because the bank gives this interpretation to the program.
The same program could be used to calculate the number of apples I have,
the number of people in a country, etc., depending on the *interpretation*.
The program is not *about* money -- it does not refer to money, or to ANYTHING.
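
To make the point concrete, here is a small sketch (the code and names are
mine, invented for illustration -- not anyone's actual banking software):

```python
def adjust(value, delta):
    """Add a signed quantity to a stored number.

    The program manipulates numbers and nothing else; what the
    numbers *stand for* is supplied entirely from outside it.
    """
    return value + delta

# One and the same computation, under three interpretations:
balance = adjust(100, -25)   # read as: a $25 withdrawal
apples  = adjust(100, -25)   # read as: eating 25 apples
people  = adjust(100, -25)   # read as: 25 people emigrating

# Nothing in the arithmetic itself favours any one reading.
assert balance == apples == people == 75
```

The function is indifferent to dollars versus apples; the "aboutness" lives
in the users, not in the program.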

[other like examples deleted]

>There are really two categories of things: A) Things which can only be 
>simulated by a program, such as oxidation, digestion, weather; B) Things it
>can directly manipulate, such as money, ASCII text, numbers, or other
>algorithms.  The question is, do meaning, perceptions, thoughts, and 
>understanding belong to category A or B?
>
>Clearly physical processes belong to category A; category B is restricted
>to non-physical, symbolic things.  But thought might be such a thing.
>
>Searle places it in category A.  "All mental phenomena, then, are caused by
>neurophysiological processes in the brain."  Thought is a physical phenomenon
>--the brain secretes thought as the electric eel produces electricity.
>
>Well, it's a point of view, but there are alternatives.  For instance:
>1) Mind is spiritual, and spiritual things cannot be computed (they fall into
>category A).  2) Thought is abstract and symbolic, falling into category B,
>and can be instantiated with either brains or algorithms.  3) Thought is
>immaterial but has properties which cannot be modelled on a Turing machine.
>4) Nobody knows how mind and brain are related and the whole question is
>open.  Searle provides no reasons for preferring his own categorization,
>which indeed he introduces as an axiom, not a conclusion.

This is a misunderstanding of Searle.  Searle's argument against strong AI
is that it subscribes to the view that syntax can yield semantics, that
manipulating marks purely on the basis of their shape can yield meaning.
Searle takes this to be a *logical* impossibility.  His *conclusion* is
that minds cannot solely be the result of formal computation, but must
somehow arise due to the physico-chemical nature of the brain.  I personally 
am leery of this conclusion, but I think his argument is correct.

Remember that in alternative 2) above, the term "symbol" is used.  If you
*truly* mean "symbol", then you must explain how such things end up
*representing* other things, how they take on meaning.  Otherwise, you
are simply talking about the pushing around of "marks" or "squiggles" without
any reference.  It is important not to equivocate between these two potential
meanings of "symbol", as many of Searle's critics do, because it is a crucial
distinction for the argument.  The whole point of the Chinese Room demonstration
is that what can appear to be "symbols" that stand for things to someone 
outside the room are merely meaningless squiggles, manipulated solely
on the basis of their shape and *not* on what they refer to in the outside
world.
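
A toy version of the room can be sketched in code (the rule table below is
invented purely for illustration; nothing hangs on its contents):

```python
# The rule book pairs input marks with output marks purely by
# shape.  The operator of the room needs no idea what, if
# anything, either set of marks means.
RULES = {
    "squiggle-1": "squoggle-7",
    "squiggle-2": "squoggle-3",
}

def chinese_room(marks):
    # Match each incoming mark by its shape alone; emit its paired mark.
    return [RULES[m] for m in marks]

# To an observer outside the room this exchange may look like
# meaningful conversation; inside, it is pure shape-matching.
print(chinese_room(["squiggle-2", "squiggle-1"]))
# -> ['squoggle-3', 'squoggle-7']
```

Everything the room does is captured by the lookup; at no point does
anything in it touch what the marks refer to.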

>Searle's argument also depends on the assertion that computers are incapable
>of meaning-- they have "no semantics."  Unfortunately he never defines what
>meaning is, except to say that thoughts have meaning because "they can be
>about objects and states of affairs in the world" (p. 27).  Why can't 
>algorithms contain structures which refer to objects and states of affairs
>in the world? 

Because syntax can't yield semantics.  The reference that "symbols" in
algorithms have is due solely to our interpretation of them.  We could
easily interpret them in other ways, and we would be just as correct.  They
have no *inherent* meaning.

> Ah, because all mental phenomena (presumably including meaning)
>are physical, caused by "neurophysiological processes."

Don't confuse Searle's negative argument, namely, that computers can't have
semantics, with his positive argument, namely, that minds have semantics only in
virtue of their physical makeup.  There are other possible ways in which
minds might come to have semantics, even though computers don't.  The reason
that computers can't have semantics has nothing to do with how the
brain generates semantics, apart from the fact that, according to Searle,
it doesn't do it through syntax alone, which would be logically impossible.

>I would like to see an elaboration of this theory of meaning as a physical
>phenomenon.

Me too.  I find his positive thesis opaque, and, if I understand it
correctly, incoherent.  But that does not mean that his negative
argument is incorrect.  It merely means he is incorrect about how
human minds get around the problem. 
 
>  I would also like to see Searle admit that this material theory
>of meaning is something of a minority viewpoint. 

Well, that is certainly the case in AI, but then for AI to proceed it
must deny a material theory of meaning.  There may be others *outside*
AI who hold views similar to Searle's, although I'm not sure.  Be that
as it may, we certainly don't want to base our philosophy on a head count,
do we?  (After all, at one time a majority of astronomers thought that the
Sun revolved around the Earth...)

> But of course if he could
>entertain a non-material conception of meaning, he would have no argument
>that algorithms are incapable of it.
>
>How does our robot, the one which duplicates the structure and external 
>behavior of the brain, fail to mean?  You call it by name and it responds.  
>You ask it to pick up your handkerchief, or repair your spacecraft, and it 
>complies.  Nor is it merely a matter of outward behavior: by hypothesis,
>its internal cognitive functioning is the same as the brain's.  Like you,
>the robot has extensive actual experience which backs up the way it uses 
>words.

No, it doesn't.  The Chinese Room is a demonstration of this.

>It's true that the robot's mental phenomena (meaning, thought, etc.) seem
>to disappear as we look at increasingly lower levels of the algorithm and
>at its underlying hardware.  The CPU, or the man in the Chinese room, do not
>share the robot's understanding.  But we should not attach too much weight
>to this, for we can do the same thing with the brain, even under Searle's
>understanding of mental phenomena.  To Searle, brains (somehow) refer;
>but do neurons?  do molecules?  do atoms?  do quarks?

You're correct that a material theory of meaning seems to fall apart
under the microscope.  But I find it no harder to imagine that neurons
refer than to imagine that an abstract pattern refers, which is what
AI demands.  

I am quite willing to admit that I am unhappy with the alternative that
Searle has offered.  But this in no way invalidates his attack on AI.

- michael
