From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!bonnie.concordia.ca!uunet!zaphod.mps.ohio-state.edu!cis.ohio-state.edu!sei.cmu.edu!fs7.ece.cmu.edu!crabapple.srv.cs.cmu.edu!andrew.cmu.edu!fb0m+ Mon Dec 16 11:01:23 EST 1991
Article 2066 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!bonnie.concordia.ca!uunet!zaphod.mps.ohio-state.edu!cis.ohio-state.edu!sei.cmu.edu!fs7.ece.cmu.edu!crabapple.srv.cs.cmu.edu!andrew.cmu.edu!fb0m+
From: fb0m+@andrew.cmu.edu (Franklin Boyle)
Newsgroups: comp.ai.philosophy
Subject: Re: Searle, again
Message-ID: <4dFul6O00Uh7A37HQ7@andrew.cmu.edu>
Date: 12 Dec 91 18:36:54 GMT
Organization: Cntr for Design of Educational Computing, Carnegie Mellon, Pittsburgh, PA
Lines: 87

Mark Rosenfelder writes:

> In article <5826@skye.ed.ac.uk> jeff@aiai.UUCP (Jeff Dalton) writes:
>> If Searle's right about the CR, it doesn't matter what the program is.
>
><.....>
> Searle's argument also depends on the assertion that computers are incapable
> of meaning-- they have "no semantics."  Unfortunately he never defines what
> meaning is, except to say that thoughts have meaning because "they can be
> about objects and states of affairs in the world" (p. 27).  Why can't 
> algorithms contain structures which refer to objects and states of affairs
> in the world?  Ah, because all mental phenomena (presumably including
> meaning) are physical, caused by "neurophysiological processes."

Aren't the physical changes that instantiate the algorithm physical?
There are two reasons algorithms, as manipulations of symbols, do not
refer to objects or states of affairs in the world.  The first is that
the symbols do not physically resemble, in any way, the things to which
we hold them to refer.  The second is that even if they did resemble
the structures of external objects, the physical process of pattern
matching does not transmit physical structure; hence, inputs to the
system (say, from a camera) can only *trigger* further processing.  That
is, the matcher, because it structurally couples to the incoming pattern,
can only output a signal that further change should occur.  But this signal
carries no structural information about the input -- it is merely a voltage
change.  Of course, you could set up the system so that the signal triggers
another process and thus evokes the appropriate behavior, given the
input and the system's current state.  But the system wouldn't understand
what it was doing because, I claim, there is no transfer of structure.
In other words, all the action in the system is a bunch of structureless
outputs from lots of pattern-matcher couplings.  That's why patterns can be
anything we want them to be: *physically*, their matchers look just
like them.
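
To make the point concrete, here is a minimal sketch (in Python, with
hypothetical names; not a description of any actual system) of what I mean
by a matcher's output being a structureless trigger rather than a copy of
the input's structure:

    # Hypothetical sketch: a pattern matcher compares an input against a
    # stored template and emits only a structureless trigger; the downstream
    # process sees that trigger, never the structure of the input itself.

    STORED_PATTERN = "zebra"      # the template the matcher "looks just like"

    def matcher(input_pattern: str) -> bool:
        # Structural coupling reduced to a comparison; the output is a
        # single bit ("further change should occur"), analogous to a
        # voltage change.
        return input_pattern == STORED_PATTERN

    def downstream_process(trigger: bool) -> str:
        # The triggered process can evoke appropriate behavior, but it
        # receives no structural information about what the camera
        # actually delivered.
        return "run zebra routine" if trigger else "do nothing"

    if __name__ == "__main__":
        camera_input = "zebra"
        signal = matcher(camera_input)      # True -- one structureless bit
        print(downstream_process(signal))   # behavior driven by the trigger alone

However you elaborate the downstream behavior, everything past the matcher
is driven by that one bit, not by the structure of the input.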

This is why, when Drew McDermott says,

"All that we require for the sentence to have been used "correctly" is
 that the second robot connect the symbols "Zulu" and "zebra" to the
 types of entities it is now perceiving tokens of.  (Puzzles about what
 if the apparent zebras are really holograms, which cause such trouble
 for Searlites and Putnamites, do not arise for the computationalist.)
 
 (.....) 
 
 The bottom line is that semantics is epiphenomenal, ....",

the crucial term is "connect".  How are they connected?  Semantics may very
well be epiphenomenal, but the robots, if they are pattern-matching systems,
will not experience the "meaning" or "understanding" we experience.  In other
words, semantics will not be epiphenomenal in the robots, only in us, because,
and this is my second claim, we are not primarily pattern-matching systems.

> I would like to see an elaboration of this theory of meaning as a physical
> phenomenon.  I would also like to see Searle admit that this material theory
> of meaning is something of a minority viewpoint.  But of course if he could
> entertain a non-material conception of meaning, he would have no argument
> that algorithms are incapable of it.

I don't believe that Searle has a material theory, but I would imagine that
at the moment what I said in the first paragraph above is a minority
viewpoint.  Yet I would call it part of the basis of a physical theory of
meaning.

> How does our robot, the one which duplicates the structure and external 
> behavior of the brain, fail to mean?  You call it by name and it responds.  
> You ask it to pick up your handkerchief, or repair your spacecraft, and it 
> complies.  Nor is it merely a matter of outward behavior: by hypothesis,
> its internal cognitive functioning is the same as the brain's.  Like you,
> the robot has extensive actual experience which backs up the way it uses 
> words.

Depends on what you mean by "duplicates".

> It's true that the robot's mental phenomena (meaning, thought, etc.) seem
> to disappear as we look at increasingly lower levels of the algorithm and
> at its underlying hardware.  The CPU, or the man in the Chinese room, do not
> share the robot's understanding.  But we should not attach too much weight
> to this, for we can do the same thing with the brain, even under Searle's
> understanding of mental phenomena.  To Searle, brains (somehow) refer;
> but do neurons?  do molecules?  do atoms?  do quarks?  At some reductive
> level the mental phenomena, in brains or robots, simply disappear.

Think of what would be physically informational.  Does a neuron or molecule
or atom or quark contain structural information about the input?

-Frank


