From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!elroy.jpl.nasa.gov!swrinde!gatech!cc.gatech.edu!terminus!centaur Tue Mar 24 09:56:07 EST 1992
Article 4490 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!elroy.jpl.nasa.gov!swrinde!gatech!cc.gatech.edu!terminus!centaur
From: centaur@terminus.gatech.edu (Anthony G. Francis)
Newsgroups: comp.ai.philosophy
Subject: Re: The Systems Reply I
Message-ID: <centaur.700790865@cc.gatech.edu>
Date: 17 Mar 92 00:07:45 GMT
References: <BL1p0D.6II@world.std.com> <1992Mar14.182737.15329@psych.toronto.edu> <1992Mar14.213045.21776@mp.cs.niu.edu> <1992Mar16.224423.29809@psych.toronto.edu>
Sender: news@cc.gatech.edu
Organization: Georgia Tech College of Computing
Lines: 69

michael@psych.toronto.edu (Michael Gemar) writes:
>In article <1992Mar14.213045.21776@mp.cs.niu.edu> rickert@mp.cs.niu.edu (Neil Rickert) writes:
>> More to the point:
>>	There can be no final convincing proof that strong AI is
>>	possible until there is an actual implementation.
>No.  This is wrong.  An implementation will *not* demonstrate that it has
>semantics (or understanding, or qualia, or whatever).  This is *not* a
>matter of empirical investigation, but of conceptual analysis. 
>- michael

No, that's simply not correct. An actual implementation of intelligence in
a computer _would_ be a proof of Strong AI. The issue is what entitles us
to define an implementation of something as _being_ intelligent; to
Searle, the crux is that performance is not sufficient proof of
intelligence, which he links to the presence of {semantics | understanding
| qualia | intentionality}. To Searle, no computer can have semantics, and
therefore no computer, _no matter what its functionality, no matter how
close its behavior is to a human's, no matter =how= indistinguishable it
is from you or me in any behavioral observable_, can ever be considered
intelligent.

This, I think, is the big problem with the Chinese Room. It's an attempt
to show that a functionalist definition of intelligence is insufficient,
and as such is the first logical step towards denying the existence of any
minds other than our own. We know whether or not we "understand" something,
but we have _no_ privileged insight into other minds, only into our own,
and therefore cannot with certainty generalize that our own privileged
access extends to all individuals of a certain type (e.g., human). This
is, of course, the purpose behind the Robot Reply: it is an attempt to
place the Chinese Room on the same footing with respect to us that other
humans occupy, in terms of external, observable specifications and degree
of access.

Note that Searle's claims, in the extreme form of the Chinese Room, are
markedly different from the claims of Penrose, Zeleny or Winograd. Penrose
and Zeleny have both posited cognitive models which exceed the capacity of
Turing Machines and have made _specific_ claims about what human behaviors
those models account for (mathematical performance for Penrose and the
phenomenon of reference for Zeleny) and what internal architectures (or,
in the case of Penrose, physical structures and properties) would be
required to support those models. Winograd's claim is essentially that
humans are dynamic physical systems and that symbol manipulation does not
map in any useful way onto the function of the brain (please pardon me if
I have brutalized Winograd's position for the sake of brevity).

Searle's claims _in the sources that I have read_ (the CR argument, a good
bit of _Minds, Brains and Science_) are much more poorly specified than
even Penrose's, and lack (as the Chinese Room example demonstrates) any
possibility of empirical testing or validation. As I've said before, the
Chinese Room (and its child, the Memorization Reply) commits grave errors
concerning the distinctions between systems, virtual machines, and virtual
machine levels; from the point of view of computer science, these errors
are just as severe as the conflations of semantic and syntactic terms that
the Chinese Room's opponents commit from the point of view of philosophy.
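
To make the virtual-machine point concrete, here is a minimal sketch (my
own illustration in Python; the rule book is a hypothetical toy, and none
of this comes from Searle's text). The executor below manipulates symbols
purely by table lookup; whatever conversational competence the system
shows belongs to the rule book it implements, a virtual machine level
above the executor, not to the loop that mechanically applies it.

    # Illustrative sketch only: a rule-following "room" that implements
    # a higher-level virtual machine.  RULE_BOOK is a hypothetical toy.
    RULE_BOOK = {
        # The executor never knows that these pair questions with
        # answers; to it they are uninterpreted squiggles.
        "ni hao ma?": "wo hen hao.",
        "ni jiao shenme mingzi?": "wo mei you mingzi.",
    }

    def executor(symbols: str) -> str:
        """Searle-in-the-room: pure syntax, a single table lookup."""
        return RULE_BOOK.get(symbols, "qing zai shuo yi bian.")

    if __name__ == "__main__":
        # Any "answering in Chinese" is a property of the system
        # (executor plus rule book); the executor loop is identical no
        # matter what the rules happen to encode.
        for question in ("ni hao ma?", "ni jiao shenme mingzi?"):
            print(question, "->", executor(question))

The Systems Reply, put in these terms, is just the observation that
properties of the executor-plus-rule-book system need not be properties
of the executor alone, any more than properties of a LISP program are
properties of the hardware that interprets it.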

Far be it from me to say it, but let's let the poor guy out of the Chinese
Room, close the door, and go back to our private little wars over
intension and extension which, if they can ever be explicated to the level
that everyone can understand (ha ha), stand a much greater chance of
converting people to either side than the Chinese Room ever will.

-Anthony Francis
--
Anthony G. Francis, Jr.  - Georgia Tech {Atl.,GA 30332}
Internet Mail Address: 	 - centaur@cc.gatech.edu
UUCP Address:		 - ...!{allegra,amd,hplabs,ut-ngp}!gatech!prism!gt4864b
-------------------------------Quote of the post------------------------------ 
"Cerebus doesn't love you ... Cerebus just wants all your money" 
		- Cerebus the Aardvark, from a _Church and State_ T-shirt
------------------------------------------------------------------------------