From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael Tue Mar 24 09:57:08 EST 1992
Article 4584 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael
From: michael@psych.toronto.edu (Michael Gemar)
Subject: Re: The Systems Reply I
Organization: Department of Psychology, University of Toronto
References: <1992Mar14.213045.21776@mp.cs.niu.edu> <1992Mar16.224423.29809@psych.toronto.edu> <1992Mar17.004658.29591@mp.cs.niu.edu>
Message-ID: <1992Mar19.021512.11741@psych.toronto.edu>
Date: Thu, 19 Mar 1992 02:15:12 GMT

In article <1992Mar17.004658.29591@mp.cs.niu.edu> rickert@mp.cs.niu.edu (Neil Rickert) writes:
>In article <1992Mar16.224423.29809@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:
>>In article <1992Mar14.213045.21776@mp.cs.niu.edu> rickert@mp.cs.niu.edu (Neil Rickert) writes:
>>>
>>>	There can be no final convincing proof that strong AI is
>>>	possible until there is an actual implementation.
>>
>>No.  This is wrong.  An implementation will *not* demonstrate that it has
>>semantics (or understanding, or qualia, or whatever).  This is *not* a
>>matter of empirical investigation, but of conceptual analysis. 
>
>  Aha!  That explains everything.  Now that I realize you do not understand
>the difference between a necessary condition and a sufficient condition, the
>source of your confusion is suddenly apparent :-).


Gosh, I guess ya caught me, Neil!

But seriously folks, I do agree that an actual implementation is a
necessary condition for a "final, convincing" proof of the possibility
of Strong AI.  But I really don't care all that much about necessary
conditions.  I am much more interested in sufficient ones.  And an
implementation would not be a *sufficient* condition.  This has been (I      
thought) the point of the debate all along.  I've been willing to grant
that the necessary condition could be produced - the assumption that
the Chinese Room is possible is simply the assumption of such an
implementation.  The *real* question is whether such an implementation is
all that is needed, that is, whether an implementation's passing the
Turing Test is by itself *sufficient*.  The answer here, it seems to
me, is a resounding "no."

- michael
