From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!uwm.edu!caen!sol.ctr.columbia.edu!bronze!chalmers Tue Jan 28 12:16:04 EST 1992
Article 3027 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!uwm.edu!caen!sol.ctr.columbia.edu!bronze!chalmers
From: chalmers@bronze.ucs.indiana.edu (David Chalmers)
Subject: Re: Virtual Person?
Message-ID: <1992Jan22.222309.3851@bronze.ucs.indiana.edu>
Organization: Indiana University
References: <1992Jan19.211715.9777@bronze.ucs.indiana.edu> <6025@skye.ed.ac.uk> <1992Jan22.213820.20784@cs.yale.edu>
Date: Wed, 22 Jan 92 22:23:09 GMT
Lines: 24

In article <1992Jan22.213820.20784@cs.yale.edu> mcdermott-drew@CS.YALE.EDU (Drew McDermott) writes:

>The problem is that there is no content to (2) except the intuition
>that Strong AI is wrong.

I disagree with this -- there are probably a lot of people out there who
have no prior opinion one way or the other about strong AI, but who have
the strong belief that if a person doesn't understand, then a system
consisting of that person manipulating paper certainly doesn't either.

Of course (2) *implies* that strong AI is wrong, and when you point this
out to those people they'll presumably accept that strong AI is wrong;
but that, in a sense, is the point of Searle's arguments: to ground
the rejection of strong AI in some prior intuition.

I agree that it's a *bad* argument, and that no real support is given
for the crucial premise (2), but an incomplete argument isn't the same
as a circular argument.  (Of course, it's pretty pointless arguing about
the precise sense in which a bad argument is bad...)

-- 
Dave Chalmers                            (dave@cogsci.indiana.edu)      
Center for Research on Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable."
