From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!ames!nsisrv!kong!mimsy!kohout Thu Dec 26 23:57:43 EST 1991
Article 2328 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!ames!nsisrv!kong!mimsy!kohout
From: kohout@cs.umd.edu (Robert Kohout)
Newsgroups: comp.ai.philosophy
Subject: Re: Searle's response to silicon brain?
Message-ID: <45303@mimsy.umd.edu>
Date: 20 Dec 91 22:51:33 GMT
References: <40822@dime.cs.umass.edu> <1991Dec18.172040.3506@spss.com>
Sender: news@mimsy.umd.edu
Organization: U of Maryland, Dept. of Computer Science, Coll. Pk., MD 20742
Lines: 53

Mark Rosenfelder writes:
>>performance is no different from a normal human.
>
>"One can imagine a computer simulation of the action of peptides in the
>hypothalamus that is accurate down to the last synapse.  But equally one
>can imagine a computer simulation of the oxidation of hydrocarbons in a car
>engine....  And the simulation is no more the real thing in the case of the
>brain than it is in the case of the car...."
>--Searle, Scientific American Jan. 1990, p. 29.
>
>Searle believes that understanding and other mental phenomena have some
>*physical* basis, tied to their actual implementation in the brain, which
>computers cannot reproduce, although they could simulate them.

Hmmmm. Does this mean that Searle believes that intelligent behavior
is possible without intelligence? Whenever I begin to feel that there
might be something to this Searlism, something like this pops up that
once again seems to imply its absurdity.

Let me elaborate. 

If Searle would be unimpressed by a system that accurately modeled the
activity of a human brain down to the last synapse, one should have
little incentive to impress him. Such a system could be easily used
to instruct a machine how to learn and speak a language, to learn and
play chess, to plan and execute elaborate schemes for accomplishing
elaborate ends. In short, it would meet all the goals of the strong
AI proponents, except that it wouldn't be "the real thing".

I have many times voiced the same objection to the Chinese Room argument,
namely that Searle seems primarily concerned with the fact that,
in such a system, there is apparently no entity which can be said
to understand. If one follows his argument in "Minds, Brains, and
Science" carefully, it turns out that Searle is only attempting to
refute the hypothesis that Mind/Brain = Program/Computer. Setting
aside the fact that he fails to accomplish this, why should anyone
care if he did?

The above-quoted passage makes it clear that Searle is not concerned
with intelligent behavior, which is all any AI practitioner I have
ever met has been concerned with. He is arguing a deeper issue:
whether or not a mere machine could ever be said to be sentient. While
this is no doubt a worthy topic of inquiry, I fail to see how
it impinges upon my attempts to write programs that can generate and
execute reasonable plans in a timely fashion. There are no doubt
limits on our abilities as engineers. I for one wish to be no digital
Frankenstein. On the other hand, if someone in this debate sees how
the correctness of Searle's position in any way implies that,
for example, we will never be able to engineer a fully automatic,
high-quality machine translator, I wish they'd explain it in a
way that I could understand.

Bob Kohout
