Article 2879 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!zaphod.mps.ohio-state.edu!caen!news.cs.indiana.edu!bronze!chalmers
From: chalmers@bronze.ucs.indiana.edu (David Chalmers)
Subject: Re: Virtual Person?
Message-ID: <1992Jan18.221424.23190@bronze.ucs.indiana.edu>
Organization: Indiana University
References: <1992Jan16.125322.25008@cis.ohio-state.edu> <1992Jan16.183647.19319@bronze.ucs.indiana.edu> <1992Jan17.042912.20742@cis.ohio-state.edu>
Date: Sat, 18 Jan 92 22:14:24 GMT
Lines: 38

In article <1992Jan17.042912.20742@cis.ohio-state.edu> chandra@cannelloni.cis.ohio-state.edu (B Chandrasekaran) writes:

>  I think that the problem might stem from somewhat different uses of
>the word "program".  I use it to mean a set of instructions that are
>interpreted by an appropriate Turing Machine, i.e., something like the
>book of instructions in the Chinese Room.  No additional causal powers
>are assumed.  You use the word "program" for something that can be
>implemented by the kind of brain that Searle assumes, namely something
>that adds causal powers of a certain sort.  I agree that with your
>interpretation of "program" your formulation of his argument works.

Well, this depends on what you mean by causal powers -- if you mean,
as Searle does, the power to cause a mind, then the above would be more
or less question-begging.  If you mean simply that the system should have
no physical causal structure over and above what's specified by the program,
then that would seem to be hard to achieve, as there's always going to be
extra causal structure, e.g. at the molecular level.  So I'd prefer to
leave "causal powers" out of it altogether.

>My version of your restatement of Searle's argument is as follows:
>
>Premise 1: If strong AI is true, then there exists a program P such that
>implementing P on a Turing Machine is sufficient for mentality.
>
>Premise 2:  Any such program P can be analyzed Chinese-room style and thus 
>shown not to have mentality.  
>
>Will you agree with this version?

Well, I don't think that programs are the right kind of thing to have
mentality -- rather, implementations of programs are.  If you changed
Premise 2 to "thus shown to be such that implementing P is
insufficient for mentality", I'd agree with you.
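
The revised argument would then go through by straightforward modus
tollens.  For concreteness, here is a minimal sketch in the Lean proof
assistant -- the names Program, SufficesForMentality and StrongAI are
placeholders of my own, with "SufficesForMentality P" standing in for
"implementing P is sufficient for mentality":

  example (Program : Type)
      (SufficesForMentality : Program → Prop)
      (StrongAI : Prop)
      -- Premise 1: if strong AI is true, then there exists a program P
      -- such that implementing P is sufficient for mentality.
      (p1 : StrongAI → ∃ P, SufficesForMentality P)
      -- Premise 2 (revised): any program P can be shown, Chinese-room
      -- style, to be such that implementing P is insufficient for
      -- mentality.
      (p2 : ∀ P, ¬ SufficesForMentality P) :
      -- Conclusion, by modus tollens: strong AI is false.
      ¬ StrongAI :=
    fun h => let ⟨P, hP⟩ := p1 h; p2 P hP

Note that the fix to Premise 2 matters: the quantifiers only mesh if
both premises talk about implementations of programs, not about
programs themselves.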

-- 
Dave Chalmers                            (dave@cogsci.indiana.edu)      
Center for Research on Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable."
