From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!bonnie.concordia.ca!uunet!tdatirv!sarima Mon Dec 16 11:02:01 EST 1991
Article 2132 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!bonnie.concordia.ca!uunet!tdatirv!sarima
From: sarima@tdatirv.UUCP (Stanley Friesen)
Newsgroups: comp.ai.philosophy
Subject: Re: Searle and the Chinese Room
Message-ID: <319@tdatirv.UUCP>
Date: 14 Dec 91 19:31:20 GMT
References: <302@tdatirv.UUCP> <1991Dec9.172000.3236@psych.toronto.edu> <307@tdatirv.UUCP> <1991Dec13.041256.18178@psych.toronto.edu>
Reply-To: sarima@tdatirv.UUCP (Stanley Friesen)
Organization: Teradata Corp., Irvine
Lines: 184

In article <1991Dec13.041256.18178@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:
|
|It is not at all clear to me how neurobiology shows that the mind is the
|*functional* result of the brain.  It does show that it is the *physical*
|result, that mind is dependent upon the brain, and that if the brain
|is damaged, changes in the mind will result. ...

I am not entirely sure what the distinction here is.  If the operations of
the physical structures of the brain give rise, possibly through an
intermediate layer of operations, to the set of operations we call 'mind',
then how can the mind *not* be a functional (or perhaps emergent) result
of the brain?

|  But neurobiology
|does not show that mind results from the *functional* arrangement of
|the brain's material, at least not as the term "functional" is used
|in the context of AI.

It is the result of the arrangement and *operation* of the brain's material.

|However, it is not clear to me why the burden of proof falls on Searle.  
|I could just as easily ask for evidence that the mind results from the
|computable elements of the brain.  Again, I believe that there is no
|evidence for this (see above).

I guess this is where we differ: I see the current synergy between
computational neural-net research and neurology as an indication of
the power of that approach.  [The surprising result that the operation
of the olfactory cortex can be modelled with existing NNs is a major
step in the direction of validating this approach].
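
[For concreteness, here is a toy sketch, in Python, of the Hebbian
learning rule that underlies many such associative-net models.  The
patterns are invented; this is an illustration of the flavor of these
models, not the actual olfactory-cortex work.]

    # Toy Hebbian associator: connections strengthen between units
    # that fire together.  Illustrative sketch only.
    import numpy as np

    patterns = np.array([[1, -1,  1, -1],    # two made-up stimulus
                         [1,  1, -1, -1]])   # patterns, units = +1/-1

    W = np.zeros((4, 4))
    for p in patterns:                 # Hebb rule: W += outer(p, p)
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)             # no self-connections

    print(W)   # the associations now live in the connection weights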

|It seems to me that a philosophical analysis of *why* Searle is wrong is
|required, and not simply a claim that "Well, there's no evidence that
|he's right!"  Such a statement does nothing to attack the argument.  It
|does not *prove* that Searle is wrong.

Of course it doesn't!  It merely shows that Searle's reasoning is itself
not a proof of anything.

In the current state of neurology, this issue cannot be finally decided;
the necessary evidence is not yet in.  But there is, as yet, no compelling
reason to blindly accept Searle's reasoning.  It must either be validated or
refuted by scientific research.  Philosophical debates will *not* resolve
the question, since it is one of how the human mind derives meaning, and
that is a question for neurologists.

| Remember that producing a perfect
|simulation of a human brain would not prove Searle wrong either, since
|we are not debating about the *behaviour*, but about the *understanding*.

But I would maintain that a perfect simulation *would* understand.
Understanding is the subjective process of believing that you understand.

|>That is the context dependency, and the historical/referential complexity
|>of dialog will trip up any mechanism that does not maintain congruent internal
|>models of the discourse *and* the world.  And I maintain that such a congruent
|>internal model *is* semantics.  This is certainly consistent with all current
|>neurological and psychological results on human perception and linguistic
|>performance.
|
|It may very well be that *humans'* models of the world contain semantic
|elements, but what is precisely at issue is whether computer programs
|contain semantics.  It seems to me that to say "a congruent internal model
|*is* semantics" is simply to deny Searle's argument without one of
|your own.

But what does it mean to say that 'human models of the world contain semantic
elements'?  Unless there is something there that is not present in a computer
model, there is no difference.

At the present level of knowledge of neurology, it seems that 'semantic
elements' are likely to be just associations between stimulus sets at a
level above the raw sense data.

Essentially, Searle's argument seems to boil down to the equivalent of the
following:

Start with a heap of sand, now start removing grains of sand.
At what point does it stop being a heap?

It is based on an inappropriate *binary* dichotomy between 'semantics'
and 'response'.  A simple thermometer certainly does not show 'meaning',
nor, probably, do our simple AI systems of today (as the equivalent of
a few grains of sand).  But the mind of any vertebrate involves many more
sense associations and interactions than any existing computational model.
So there we *do* have a heap, or rather, meaning.

|Searle is happy to grant that brains have minds.  He is quite happy to say
|that minds result from the physical properties of brains (no "special
|mechanisms").  This is not at all the same as saying that *any* 
|implementation of the *formal* aspects of the brain will result in
|understanding any more than saying *any* implementation of the *formal*
|aspects of rubber will result in elasticity.

You are again using a *physical* analogy.  For this analogy to apply,
the mind must be a physical entity, rather than an informational entity.
An informational entity is fully identified by its formal properties.
So, how is the *mind* (not the brain) a physical entity?

|>He must show me some *relevant* mechanism in neural activity along with some
|>experimental or observational evidence that this mechanism actually pertains
|>to cognition before I will accept his conclusions.
|
|And he would argue that, in the face of his argument, you must show him
|*how* syntactic systems yield semantics.  Who has the greater burden of proof?

The theory that postulates more relevant factors bears it, and that is Searle's.
And, given that I believe that semantics is essentially just association,
I have nothing to prove.  My definition of semantics *entails* the explanation.
Searle must now show why my definition of meaning does not apply to humans.

|>Merely stating that 'obviously' the Chinese Room does not understand does
|>not convince me, he must *demonstrate* it does not.  He has failed to do so.
|
|Well, *I* wouldn't understand Chinese in the Chinese Room.  Even if I
|internalized the rules.  

So?  *You* are not the Chinese Room, just a component of it.
The room is a separate person.  You can no more expect to 'understand'
everything it does than a neuron in your brain can expect to understand
everything you do.

|>And I consider it to be one of the weakest premises in his argument.  The
|>brain of any animal seems, according to current knowledge, to be primarily
|>an *information* transducing device.  My understanding of information theory
|>is that form does not matter to information per se.
|
|Be careful how you use the term "information".  If you use it in the
|mathematical, Shannon sense of the word, then "information" carries no
|semantic content.  It has no reference.  If, on the other hand, you
|are using it in the more colloquial sense, then you need to explicate the
|above more fully, since what counts as meaningful information is 
|precisely up to the receiver.  This gets us into another of the
|classic difficulties in AI, the Frame Problem.

I certainly do not mean Shannon's 'information'.

I might suggest a definition somewhat like the following:

Information is a set of data structures that allow an entity to solve
problems it is faced with.

In biology, at least, the relevant entity is simply the physical individual, since
that is the unit of natural selection, and therefore the entity that must
solve problems in order to succeed.

Thus the individual defines the frame by its survival needs.

|>Unfortunately, that is *all* he does, he *argues* this. He fails to
|>demonstrate it.  And since his position seems to me to be contrary to the
|>current state of neurology, I reject it as unfounded speculation.
|
|Again, I fail to see how it is contrary to neurology, although I'd be
|happy to hear more details.

Maybe not 'contrary', merely not supported by it.  That is, there is no
evidence from behavioral and neurological research that any process
is relevant except the data transformations performed by the neurons.

I guess I am just requiring a higher standard of evidence because Searle's
position seems to me to be postulating something additional that is not yet
shown to be necessary.  And unnecessary components in a theory are undesirable.

|>|Association *by itself* is not meaning.
|>
|>I say it is, this certainly seems to be what the human brain does.  There
|>is no observable evidence that it does anything else.  Unless there is some
|>way of testing what else the brain is supposed to be doing in establishing
|>meaning, this whole idea remains pure speculation.
|
|If by association you mean simple stimulus-response, then this is certainly
|not how we learn the paradigmatic example of semantic systems, language.  See
|Chomsky's reply to Skinner.

Not simple stimulus-response.  That would merely be reflex, not associative
memory.  I am talking about the extremely powerful capability of a family
of neurons to categorize, classify, and reconstruct complex patterns of
inputs.

This capability, when applied recursively, seems sufficient to me to explain
all of the processes we humans call 'meaning'.
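
[To make 'reconstruct complex patterns of inputs' concrete: below is a
minimal Python sketch of a Hopfield-style autoassociative memory, one
standard model of exactly this capability.  The sizes and patterns are
invented for the example; it is an illustration, not a claim about
real neurons.]

    # Store +1/-1 patterns with a Hebb rule, then reconstruct one of
    # them from a corrupted cue by repeatedly driving each unit to
    # agree with the sign of its weighted input.
    import numpy as np

    rng = np.random.default_rng(0)
    N = 64
    patterns = rng.choice([-1, 1], size=(3, N))   # three stored patterns

    W = np.zeros((N, N))
    for p in patterns:                            # Hebbian storage
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)                        # no self-connections

    cue = patterns[0].copy()
    flip = rng.choice(N, size=12, replace=False)  # corrupt 12 of 64 units
    cue[flip] *= -1

    state = cue
    for _ in range(10):                           # settle toward a memory
        state = np.where(W @ state >= 0, 1, -1)

    print("units recovered:", int((state == patterns[0]).sum()), "of", N)
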
-- 
---------------
uunet!tdatirv!sarima				(Stanley Friesen)


