From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael Mon Dec 16 11:01:34 EST 1991
Article 2086 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael
From: michael@psych.toronto.edu (Michael Gemar)
Subject: Re: Searle and the Chinese Room
Message-ID: <1991Dec13.041256.18178@psych.toronto.edu>
Organization: Department of Psychology, University of Toronto
References: <302@tdatirv.UUCP> <1991Dec9.172000.3236@psych.toronto.edu> <307@tdatirv.UUCP>
Date: Fri, 13 Dec 1991 04:12:56 GMT

Round and round we go...

In article <307@tdatirv.UUCP> sarima@tdatirv.UUCP (Stanley Friesen) writes:
>In article <1991Dec9.172000.3236@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:
>|>Then humans do not understand either.  Or both humans and computers can
>|>understand if programmed for semantics as well as syntax (whatever that
>|>may mean).
>|
>|Whatever *syntax* might mean?!!!
>
>No, I meant 'whatever (programmed for semantics as well as syntax) might mean'
>The scope of the 'that' was the entire preceding clause, not the single word.

My apologies.  I did not catch your meaning.  

>|>The serious error in Searle's reasoning is that he has *never* shown any
>|>*objective* evidence that my brain is doing anything that a computer attached
>|>to appropriate input devices could not do.
>|
>|He is providing a *logical* argument.  It is true (Searle asserts) due to the
>|meaning of the terms.  No evidence is required.
>
>I require evidence.  I see too much evidence from neurophysiology that the
>mind is a *functional* result of the operation of the brain to merely accept
>a syllogism that concludes the opposite without evidence.

It is not at all clear to me how neurobiology shows that the mind is the
*functional* result of the brain.  It does show that it is the *physical*
result, that mind is dependent upon the brain, and that if the brain
is damaged, changes in the mind will result.  Searle is quite happy with
this conclusion, as he is a materialist, not a dualist.  But neurobiology
does not show that mind results from the *functional* arrangement of
the brain's material, at least not as the term "functional" is used
in the context of AI.

>Searle *must* provide evidence that the brain uses some non-computable means
>in establishing meaning before I will admit that his logic is based on a
>valid premise.

Well, Searle doesn't offer much evidence for this claim, but Penrose
certainly argues for it, and some recent discussion on the Net has
also pointed in that direction (although I certainly don't claim to
understand it all).

However, it is not clear to me why the burden of proof falls on Searle.  
I could just as easily ask for evidence that the mind results from the
computable elements of the brain.  Again, I believe that there is no
evidence for this (see above).

>Until then I will simply continue to maintain that his case is 'speculative'
>rather than compelling.

And I will continue to maintain that the proper way to attack the argument
is to attack the premises:

1) Syntax in and of itself is not sufficient for semantics.
2) Computers are purely syntactic.

Therefore computers cannot contain semantics.

It seems to me that a philosophical analysis of *why* Searle is wrong is
required, and not simply a claim that "Well, there's no evidence that
he's right!"  Such a statement does nothing to attack the argument.  It
does not *prove* that Searle is wrong.  Remember that producing a perfect
simulation of a human brain would not prove Searle wrong either, since
we are not debating about the *behaviour*, but about the *understanding*.
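Spelled out, the argument is a simple modus ponens.  Here is a sketch in
Lean (purely illustrative; the names are mine).  Note that it establishes
only that the conclusion *follows from* the premises -- which nobody
disputes -- not that the premises are true, which is what the debate
is about:

```lean
-- Searle's syllogism with both premises taken as explicit hypotheses.
theorem searle_syllogism {System : Type}
    (purelySyntactic hasSemantics : System → Prop)
    (computer : System)
    -- Premise 1: syntax by itself is not sufficient for semantics.
    (p1 : ∀ s, purelySyntactic s → ¬ hasSemantics s)
    -- Premise 2: computers are purely syntactic.
    (p2 : purelySyntactic computer) :
    ¬ hasSemantics computer :=
  p1 computer p2
```

So an attack on the conclusion must go through one of the two premises.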

>|However, the performance of the Chinese Room demonstration could easily 
>|provide the objective evidence you seek.  I would still claim that, no
>|matter what input devices you hook up, you would still not understand Chinese.
>
>And I maintain that unless it understands Chinese it will be unable to fool
>native Chinese speakers for more than a few minutes.
>That is the context dependency, and the historical/referential complexity
>of dialog will trip up any mechanism that does not maintain congruent internal
>models of the discourse *and* the world.  And I maintain that such a congruent
>internal model *is* semantics.  This is certainly consistent with all current
>neurological and psychological results on human perception and linguistic
>performance.

It may very well be that *humans'* models of the world contain semantic
elements, but what is precisely at issue is whether computer programs
contain semantics.  It seems to me that to say "a congruent internal model
*is* semantics" is simply to deny Searle's argument without offering an
argument of your own.

>Again, given the apparent sufficiency of hierarchical neural systems, without
>any special mechanisms, to explain animal (including human) behavior, I do
>not accept Searle's premise that the room does not understand.

Searle is happy to grant that brains have minds.  He is quite happy to say
that minds result from the physical properties of brains (no "special
mechanisms").  This is not at all the same as saying that *any* 
implementation of the *formal* aspects of the brain will result in
understanding, any more than *any* implementation of the *formal*
aspects of rubber will result in elasticity.
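The rubber analogy can be made concrete with a toy sketch (illustrative
only; the function and constants are my own):  a program can reproduce the
formal law governing an elastic spring without anything in the machine
being elastic.

```python
# Toy simulation of an ideal spring via Hooke's law, F = -k * x.
# The program captures the *formal* relation between displacement and
# restoring force; the computer running it is not thereby elastic.
def spring_force(k: float, displacement: float) -> float:
    """Restoring force of an ideal spring with stiffness k."""
    return -k * displacement

print(spring_force(2.0, 0.5))  # -1.0
```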

>He must show me some *relevant* mechanism in neural activity along with some
>experimental or observational evidence that this mechanism actually pertains
>to cognition before I will accept his conclusions.

And he would argue that, in the face of his argument, you must show him
*how* syntactic systems yield semantics.  Who has the greater burden of proof?

>Merely stating that 'obviously' the Chinese Room does not understand does
>not convince me, he must *demonstrate* it does not.  He has failed to do so.

Well, *I* wouldn't understand Chinese in the Chinese Room.  Even if I
internalized the rules.  

>|Here, Searle would disagree with you.  By analogy, my knowledge of elasticity
>|suggests that all of the functions of elasticity are based on physical
>|properties in discrete elements.  But I can't conclude that an appropriately
>|programmed computer is elastic.
>
>True, but elasticity is a *physical* property, as far as I know cognition
>is an *informational* property.  Unless he can show that the physical
>conformation of the information is relevant to cognition, using neurological
>or psychological research, this simply becomes one of his unproven premises.
>
>And I consider it to be one of the weakest premises in his argument.  The
>brain of any animal seems, according to current knowledge, to be primarily
>an *information* transducing device.  My understanding of information theory
>is that form does not matter to information per se.

Be careful how you use the term "information".  If you use it in the
mathematical, Shannon sense of the word, then "information" carries no
semantic content.  It has no reference.  If, on the other hand, you
are using it in the more colloquial sense, then you need to explicate the
above more fully, since what counts as meaningful information is 
precisely up to the receiver.  This gets us into another of the
classic difficulties in AI, the Frame Problem.

>|Searle argues (although I do not necessarily agree with him) that it is
>|precisely the *physical* aspects of the electro-chemical reactions, and
>|*not* merely their formal properties, which are necessary for understanding.
>
>Unfortunately, that is *all* he does, he *argues* this. He fails to
>demonstrate it.  And since his position seems to me to be contrary to the
>current state of neurology, I reject it as unfounded speculation.

Again, I fail to see how it is contrary to neurology, although I'd be
happy to hear more details.


>|>Or how about a challenge to Searle's definition of semantics which excludes
>|>the very method by which the human brain establishes meaning, namely
>|>association of 'symbols' with encoded sensory data.
>|
>|Association *by itself* is not meaning.
>
>I say it is, this certainly seems to be what the human brain does.  There
>is no observable evidence that it does anything else.  Unless there is some
>way of testing what else the brain is supposed to be doing in establishing
>meaning, this whole idea remains pure speculation.

If by association you mean simple stimulus-response, then this is certainly
not how we learn the paradigmatic example of a semantic system, language.  See
Chomsky's reply to Skinner.

- michael
