From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!qt.cs.utexas.edu!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!bronze!chalmers Thu Jan 16 17:22:14 EST 1992
Article 2766 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!qt.cs.utexas.edu!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!bronze!chalmers
From: chalmers@bronze.ucs.indiana.edu (David Chalmers)
Subject: Re: Virtual Person?
Message-ID: <1992Jan16.054723.16068@bronze.ucs.indiana.edu>
Organization: Indiana University
References: <1992Jan10.213635.1884@cs.yale.edu> <5965@skye.ed.ac.uk> <1992Jan16.040733.23764@cs.yale.edu>
Date: Thu, 16 Jan 92 05:47:23 GMT
Lines: 25

In article <1992Jan16.040733.23764@cs.yale.edu> mcdermott-drew@CS.YALE.EDU (Drew McDermott) writes:

>It is devilishly hard to extract Searle's argument in a clear form.

To be fair, I think that no matter what confusing statements Searle
makes, there is at least the skeleton of a noncircular argument
there, i.e.

Premise 1: If strong AI is true, then there exists a program P such that
implementing P is sufficient for mentality.

Premise 2: Any program P can be implemented Chinese-room style, without
being accompanied by mentality.

Conclusion: Strong AI is false.

Of course, the crucial point is premise 2.  But to be fair again, I don't
think that Searle accepts premise 2 only because he thinks strong AI is
false.  I think that he thinks the idea that the Chinese room has
mentality is independently ridiculous.
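The skeleton above is a straightforward modus tollens, and the logical step
from the two premises to the conclusion can be checked mechanically.  Here is
a sketch in Lean (the predicate names are mine, chosen for readability, not
Searle's or Chalmers's):

```lean
-- Sketch of the argument's logical skeleton (names are illustrative).
variable (Program : Type)
variable (StrongAI : Prop)
variable (SufficesForMentality : Program → Prop)

-- Premise 1: if strong AI is true, some program's implementation
--            suffices for mentality.
-- Premise 2: no program's implementation suffices for mentality
--            (it can always be run Chinese-room style).
example
    (p1 : StrongAI → ∃ P : Program, SufficesForMentality P)
    (p2 : ∀ P : Program, ¬ SufficesForMentality P) :
    ¬ StrongAI := by
  intro h
  obtain ⟨P, hP⟩ := p1 h
  exact p2 P hP
```

The formalization makes the dialectical situation plain: the inference is
valid, so the whole dispute is over whether premise 2 is true.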

-- 
Dave Chalmers                            (dave@cogsci.indiana.edu)      
Center for Research on Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable."
