From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael Mon Mar  9 18:35:53 EST 1992
Article 4323 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael
From: michael@psych.toronto.edu (Michael Gemar)
Subject: Re: strong AI (Was: Re: Definition of understanding)
Organization: Department of Psychology, University of Toronto
References: <1992Mar4.190304.16485@beaver.cs.washington.edu> <1992Mar5.202705.2733@psych.toronto.edu> <1992Mar5.215201.9114@beaver.cs.washington.edu>
Message-ID: <1992Mar6.212151.17298@psych.toronto.edu>
Date: Fri, 6 Mar 1992 21:21:51 GMT

In article <1992Mar5.215201.9114@beaver.cs.washington.edu> pauld@cs.washington.edu (Paul Barton-Davis) writes:
>In article <1992Mar5.202705.2733@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:
>>If no one in computer science bought into Strong AI, then there would
>>be as much point in having *this* newsgroup as there would be
>>"comp.weather_modelling.philosophy," as the philosophical implications
>>of both of these projects would be equivalent, i.e., nil.  The primary
>>reason that philosophers are interested in AI is because of the strong
>>functionalist claim made by *many* of its still-current members.
>>Getting computer programs to do nifty things may be fun, but if AI folks
>>don't believe that they are generating minds, then AI is no different in
>>its philosophical ramifications than any other simulation discipline.
>
>I never made the claim that AI folk don't believe they might one day
>generate minds.

According to Searle's definition, this *is* "Strong AI".  Remember that
the term is defined in contrast with "Weak AI", which claims merely to
*simulate* minds, not to produce them.

> The point is that the "strong AI" agenda laid down by
>Newell, Minsky et al. in the later 1950's and on through the 70's is
>not the primary focus of most of those who now believe they might one
>day accomplish "mind creation".

Once again, the specifics of the architecture used make *no* difference
to the CR argument (at least, this is what Searle claims).  Sure, Newell &
Simon and Minsky may be out of fashion now, and connectionism may be in
vogue.  This change does nothing to the force of Searle's argument.  It
stands (or falls) purely on the hypothesis that syntactic manipulations
alone can produce semantics.  This is the hypothesis of *anyone* who
believes that computers can produce minds, whatever approach they favor.
(This is all made explicit in his original article, and in the recent
follow-up to it in Scientific American.)
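
As a purely illustrative sketch (my own, not from Searle or anyone in
this thread): "syntactic manipulation alone" just means a program like
the toy Python below, where the rule book and every symbol in it are
invented placeholders.  The code maps input strings to output strings
by formal lookup, and nothing in the computation touches what the
symbols mean:

    # Hypothetical rule book: input symbol string -> output symbol string.
    # The entries are nonsense placeholders; only their *shape* matters.
    RULE_BOOK = {
        "squiggle squoggle": "squoggle squiggle",
        "squoggle": "squiggle squiggle squoggle",
    }

    def chinese_room(symbols: str) -> str:
        # Purely formal lookup: the same code runs whether the strings
        # are Chinese sentences, weather data, or random noise.
        return RULE_BOOK.get(symbols, "squiggle")

    print(chinese_room("squiggle squoggle"))  # fluent-looking output, no
                                              # semantics anywhere inside

Whether manipulation of this kind could ever *suffice* for semantics is
exactly what the CR argument denies, and what its critics dispute.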

> Searle's CR does a reasonable job of
>showing why the simplicities of strong AI won't support a genuinely
>cognitive system; but it simultaneously illustrates, care of the
>systems reply, how that might be done. The more cognitively oriented
>connectionists are all chipping away at minor aspects of this, and
>their hypotheses and models are not of the "strong AI" mold at all.
>They do, however, retain the functional claim.

If they retain functionalism, they simply *are* Strong AI, by definition.
I believe that you have confused the philosophical commitments (which
are what Searle is concerned with) with the methodology.

>
>>If the Strong AI position is a strawman, so be it.  But then why the
>>heck is everyone so upset about the Chinese Room?
>
>Because folks like yourself and Mr. Green are using it to claim that
>no system that includes a CR can ever have anything that it is like
>to be. Since we clearly include CR-like capabilities, this is a
>patently false claim, and one worth getting upset about.

We do indeed have the capabilities of a Chinese Room.  The question is
by what means these capabilities are produced.  Do not confuse the two
points: the first is obvious; it is the second that is under debate.

- michael
