Article 4293 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!utgpu!cs.utexas.edu!sun-barr!ames!haven.umd.edu!darwin.sura.net!jvnc.net!yale.edu!think.com!rpi!batcomputer!cornell!uw-beaver!pauld
From: pauld@cs.washington.edu (Paul Barton-Davis)
Newsgroups: comp.ai.philosophy
Subject: strong AI (Was: Re: Definition of understanding)
Message-ID: <1992Mar5.215201.9114@beaver.cs.washington.edu>
Date: 5 Mar 92 21:52:01 GMT
References: <1992Mar4.172020.19505@psych.toronto.edu> <1992Mar4.190304.16485@beaver.cs.washington.edu> <1992Mar5.202705.2733@psych.toronto.edu>
Sender: news@beaver.cs.washington.edu (USENET News System)
Organization: Computer Science & Engineering, U. of Washington, Seattle
Lines: 36

In article <1992Mar5.202705.2733@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:
>If no one in computer science bought into Strong AI, then there would
>be as much point in having *this* newsgroup as there would be
>"comp.weather_modelling.philosophy," as the philosophical implications
>of both of these projects would be equivalent, i.e., nil.  The primary
>reason that philosophers are interested in AI is because of the strong
>functionalist claim made by *many* of its still-current members.
>Getting computer programs to do nifty things may be fun, but if AI folks
>don't believe that they are generating minds, then AI is no different in
>its philosophical ramifications than any other simulation discipline.

I never made the claim that AI folk don't believe they might one
day generate minds. The point is that the "strong AI" agenda laid
down by Newell, Minsky et al. in the late 1950s and on through the
'70s is not the primary focus of most of those who now believe they
might one day accomplish "mind creation". Searle's CR does a
reasonable job of showing why the simplicities of strong AI won't
support a genuinely cognitive system; but it simultaneously
illustrates, courtesy of the systems reply, how that might be done.
The more cognitively oriented connectionists are all chipping away
at minor aspects of this, and their hypotheses and models are not
of the "strong AI" mold at all. They do, however, retain the
functionalist claim.

>If the Strong AI position is a strawman, so be it.  But then why the
>heck is everyone so upset about the Chinese Room?

Because folks like yourself and Mr. Green are using it to claim
that there can never be anything it is like to be a system that
includes a CR. Since we clearly include CR-like capabilities
ourselves, this is a patently false claim, and one worth getting
upset about.

-- paul

-- 
Computer Science Laboratory	  "truth is out of style" - MC 900ft Jesus
University of Washington 		<pauld@cs.washington.edu>