From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!qt.cs.utexas.edu!cs.utexas.edu!uunet!tdatirv!sarima Mon Dec 16 11:01:17 EST 1991
Article 2055 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!qt.cs.utexas.edu!cs.utexas.edu!uunet!tdatirv!sarima
From: sarima@tdatirv.UUCP (Stanley Friesen)
Newsgroups: comp.ai.philosophy
Subject: Re: Searle and the Chinese Room
Message-ID: <307@tdatirv.UUCP>
Date: 11 Dec 91 18:08:51 GMT
References: <YAMAUCHI.91Dec5040116@heron.cs.rochester.edu> <1991Dec5.191043.10565@psych.toronto.edu> <302@tdatirv.UUCP> <1991Dec9.172000.3236@psych.toronto.edu>
Reply-To: sarima@tdatirv.UUCP (Stanley Friesen)
Organization: Teradata Corp., Irvine
Lines: 121

In article <1991Dec9.172000.3236@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:
|>Then humans do not understand either.  Or both humans and computers can
|>understand if programmed for semantics as well as syntax (whatever that
|>may mean).
|
|Whatever *syntax* might mean?!!!

No, I meant 'whatever (programmed for semantics as well as syntax) might mean'.
The scope of the 'that' was the entire preceding clause, not the single word.

|>The serious error in Searle's reasoning is that he has *never* shown any
|>*objective* evidence that my brain is doing anything that a computer attached
|>to appropriate input devices could not do.
|
|He is providing a *logical* argument.  It is true (Searle asserts) due to the
|meaning of the terms.  No evidence is required.

I require evidence.  I see too much evidence from neurophysiology that the
mind is a *functional* result of the operation of the brain to merely accept
a syllogism that concludes the opposite without evidence.

Searle *must* provide evidence that the brain uses some non-computable means
in establishing meaning before I will admit that his logic is based on a
valid premise.

Until then I will simply continue to maintain that his case is 'speculative'
rather than compelling.

|However, the performance of the Chinese Room demonstration could easily 
|provide the objective evidence you seek.  I would still claim that, no
|matter what input devices you hook up, you would still not understand Chinese.

And I maintain that unless it understands Chinese it will be unable to fool
native Chinese speakers for more than a few minutes.

That is, the context dependency and the historical/referential complexity
of dialog will trip up any mechanism that does not maintain congruent internal
models of the discourse *and* the world.  And I maintain that such a congruent
internal model *is* semantics.  This is certainly consistent with all current
neurological and psychological results on human perception and linguistic
performance.
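To make the point concrete, here is a deliberately toy sketch (in modern
pseudocode-ish Python, purely illustrative -- the names and structure are my
own invention, not a real model of the brain or of any NLU system) of why a
mechanism with no internal discourse model must fail at context-dependent
dialog:

```python
# Toy illustration: a dialog participant that resolves pronouns against
# an internal model of the discourse.  A responder keyed only on the
# surface input string fails as soon as context matters.

class DiscourseModel:
    def __init__(self):
        self.entities = []          # referents mentioned so far, in order

    def mention(self, entity):
        self.entities.append(entity)

    def resolve(self, word):
        # 'it' refers back to the most recently mentioned entity
        if word == "it" and self.entities:
            return self.entities[-1]
        return word

model = DiscourseModel()
model.mention("the red block")
model.mention("the pyramid")

# With a congruent internal model, the pronoun resolves correctly:
assert model.resolve("it") == "the pyramid"

# A pure lookup table keyed on the input string alone cannot do this:
# the same input "it" requires different outputs in different contexts.
```

The same input demands different outputs depending on history, so any
purely input-to-output table -- which is all Searle's room operator is
usually imagined to be -- is tripped up within a few exchanges.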

Again, given the apparent sufficiency of hierarchical neural systems, without
any special mechanisms, to explain animal (including human) behavior, I do
not accept Searle's premise that the room does not understand.

He must show me some *relevant* mechanism in neural activity along with some
experimental or observational evidence that this mechanism actually pertains
to cognition before I will accept his conclusions.

Merely stating that 'obviously' the Chinese Room does not understand does
not convince me; he must *demonstrate* that it does not.  He has failed to do so.

|Here, Searle would disagree with you.  By analogy, my knowledge of elasticity
|suggests that all of the functions of elasticity are based on physical
|properties in discrete elements.  But I can't conclude that an appropriately
|programmed computer is elastic.

True, but elasticity is a *physical* property; as far as I know, cognition
is an *informational* property.  Unless he can show that the physical
conformation of the information is relevant to cognition, using neurological
or psychological research, this simply becomes one of his unproven premises.

And I consider it to be one of the weakest premises in his argument.  The
brain of any animal seems, according to current knowledge, to be primarily
an *information* transducing device.  My understanding of information theory
is that form does not matter to information per se.
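A toy demonstration of that last point (again in illustrative modern Python,
with the encodings chosen arbitrarily by me): the same information can be
carried by physically different representations, and any computation defined
on the information gives the same answer regardless of the form.

```python
# Toy illustration: one message, two physically different encodings.
# A computation on the *information* (here, a parity check) does not
# care which form the information takes.

message = "meaning"

# Form 1: the information as a list of character strings "0"/"1"
bits_str = [b for ch in message for b in format(ord(ch), "08b")]

# Form 2: the same information as a tuple of booleans
bits_bool = tuple(b == "1" for ch in message
                  for b in format(ord(ch), "08b"))

parity1 = sum(b == "1" for b in bits_str) % 2
parity2 = sum(bits_bool) % 2

assert parity1 == parity2   # the form differs; the information agrees
```

If cognition is likewise a computation over information, the physical
medium -- neurons or silicon -- should be equally irrelevant.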

|Searle argues (although I do not necessarily agree with him) that it is
|precisely the *physical* aspects of the electro-chemical reactions, and
|*not* merely their formal properties, which are necessary for understanding.

Unfortunately, that is *all* he does: he *argues* this.  He fails to
demonstrate it.  And since his position seems to me to be contrary to the
current state of neurology, I reject it as unfounded speculation.
|>
|>I do doubt that a pure algorithm, lacking any sensory input modalities,
|>could show intelligence.  But computers are just as capable of processing
|>and encoding sense data as the human nervous system.
|
|But Searle's argument *assumes* that such inputs aren't necessary, that is,
|he allows strong AI its *strongest* form.  You can include special inputs
|if you like, but that merely argues *against* the possibility of 
|purely computational AI, which Searle is quite happy to assume (at least for
|the moment...)

Hmm, well, then I guess I too disagree with the 'strong' form of AI.  But I
have yet to see any recent AI worker propose such a ridiculously tight
definition of 'AI'.  I see the 'strong' AI position as being merely that
intelligence is computable by *some* means.  I do not make any demands
about it not using input devices!

If Searle is arguing only against the position that a 'pure' computer, with
no input devices, can be intelligent, he is arguing against a straw man, not
against anything real AI researchers are doing.
|>Or how about a challenge to Searle's definition of semantics which excludes
|>the very method by which the human brain establishes meaning, namely
|>association of 'symbols' with encoded sensory data.
|
|Association *by itself* is not meaning.

I say it is; this certainly seems to be what the human brain does.  There
is no observable evidence that it does anything else.  Unless there is some
way of testing what else the brain is supposed to be doing in establishing
meaning, this whole idea remains pure speculation.
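Here is the barest possible sketch of 'meaning as association' (toy modern
Python; the feature vectors and table names are my own hypothetical
illustration, not a claim about actual neural encoding): a symbol acquires
semantics by being bound to encoded sense data.

```python
# Toy illustration: a symbol's 'semantics' is the encoded sensory data
# it has been associated with, nothing more.

# Hypothetical encoded sense data: feature records for observed objects
sensory_memory = {
    "obj1": (255, 0, 0, "round"),    # a red, round thing
    "obj2": (0, 0, 255, "square"),   # a blue, square thing
}

# The symbol table: words associated with stored sensory records
lexicon = {
    "ball":  "obj1",
    "brick": "obj2",
}

def meaning(symbol):
    # The meaning of a symbol is the sense data it is bound to
    return sensory_memory[lexicon[symbol]]

assert meaning("ball") == (255, 0, 0, "round")
```

Nothing here is beyond a computer attached to suitable input devices, which
is exactly the point: if this association is what the brain does, a machine
can do it too.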
|>
|>Thus, I maintain that computers are just as capable of semantic processing
|>as are humans.  Thus his argument, while strictly true, does not apply to
|>real computers, only to his naive preconceptions about computers.
|
|What do you mean by "real computers"?  Searle's argument *in principle*
|applies to any architecture, even connectionism.

Then it applies to the human brain as well.  This is *my* point: he has
shown no way in which the brain differs from a suitable machine architecture.

And I will not assume any additional functionality without evidence.
-- 
---------------
uunet!tdatirv!sarima				(Stanley Friesen)



