From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!wupost!uwm.edu!linac!uchinews!spssig!markrose Mon Dec 16 11:01:30 EST 1991
Article 2080 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!wupost!uwm.edu!linac!uchinews!spssig!markrose
From: markrose@spss.com (Mark Rosenfelder)
Subject: Re: Searle and the Chinese Room
Message-ID: <1991Dec13.011907.42188@spss.com>
Date: Fri, 13 Dec 1991 01:19:07 GMT
References: <gdCb=YW00UhWQ2lpNp@andrew.cmu.edu> <YAMAUCHI.91Dec5040116@heron.cs.rochester.edu> <1991Dec5.191043.10565@psych.toronto.edu>
Nntp-Posting-Host: spssrs20.spss.com
Organization: SPSS, Inc.
Lines: 13

In article <1991Dec5.191043.10565@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:
>It seems to me that, unless strong AI proponents can provide a coherent
>explanation of why Searle's logical argument fails, the field as a whole
>rests on a profound misunderstanding.

In Searle's own terms, researchers can concentrate on weak AI rather than
strong AI.  Searle has nothing to say against simulating what the mind
does or how the brain works.  In fact I can't see any difference in
practice between strong AI and weak AI until we actually have a successful
mind simulation in place.  (As Drew McDermott says, to accomplish this
we will probably need a successful theory of the mind, too, and we will
then use that, rather than Searle's arguments or the Turing Test, to 
decide what intelligence is.)