From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!cis.ohio-state.edu!cannelloni.cis.ohio-state.edu!chandra Tue Jan 21 09:26:32 EST 1992
Article 2818 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!cis.ohio-state.edu!cannelloni.cis.ohio-state.edu!chandra
From: chandra@cannelloni.cis.ohio-state.edu (B Chandrasekaran)
Subject: Re: Virtual Person?
Message-ID: <1992Jan17.042912.20742@cis.ohio-state.edu>
Sender: news@cis.ohio-state.edu (NETnews)
Organization: The Ohio State University, Department of Computer and Information Science
References: <1992Jan16.054723.16068@bronze.ucs.indiana.edu> <1992Jan16.125322.25008@cis.ohio-state.edu> <1992Jan16.183647.19319@bronze.ucs.indiana.edu>
Date: Fri, 17 Jan 1992 04:29:12 GMT
Lines: 46

In article <1992Jan16.183647.19319@bronze.ucs.indiana.edu> chalmers@bronze.ucs.indiana.edu (David Chalmers) writes:
>In article <1992Jan16.125322.25008@cis.ohio-state.edu> chandra@boa.cis.ohio-state.edu (B Chandrasekaran) writes:
>
>>This can't be the argument.  Premise 2 leaves open the possibility that
>>the program P can also be implemented non-Chinese-room style with the
>>possibility of mentality, so the conclusion does not necessarily follow.
>>Premise 2 should be:
>>
>>	Any program P can be analyzed Chinese-room style and thus shown not to
>>	have mentality.  
>>
>>Now the conclusion will follow.
>
>I think I had it right the first time.  Searle doesn't want to show
>that *no* implementation of a program can have mentality (e.g. he thinks
>it's possible that the brain may implement innumerable programs).  He
>wants to show that implementing a given program cannot be *sufficient* for
>mentality -- i.e. that there doesn't exist a program P such that any
>implementation of P would have mentality.  So exhibiting a single
>implementation of any given P that lacks mentality is enough to
>establish his case.

  I think the problem may be that we are using the word "program"
somewhat differently.  I use it to mean a set of instructions
interpreted by an appropriate Turing Machine, i.e., something like the
book of instructions in the CR; no additional causal powers are
assumed.  You use "program" to mean something that can be implemented
by the kind of brain that Searle assumes, namely something that adds
causal powers of a certain sort.  I agree that under your
interpretation of "program" your formulation of his argument works.
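
To make my usage concrete, here is a minimal sketch of what I mean
(Python, purely illustrative; the rule table is invented and stands in
for the book of instructions in the CR):

# A "program" in my sense: a bare table of quintuples,
# (state, symbol) -> (new state, symbol to write, head move).
# The table is the program; the interpreter below just follows it.
RULES = {
    ('start', '0'): ('start', '1', +1),  # flip 0 to 1, move right
    ('start', '1'): ('start', '0', +1),  # flip 1 to 0, move right
    ('start', ' '): ('halt',  ' ',  0),  # blank square: stop
}

def run(tape, state='start', head=0):
    # Interpret RULES the way the man in the room interprets his
    # book: pure symbol shuffling, with no causal powers beyond
    # the shuffling itself.
    tape = list(tape) + [' ']
    while state != 'halt':
        state, tape[head], move = RULES[(state, tape[head])]
        head += move
    return ''.join(tape).rstrip()

print(run('0110'))   # prints '1001'

Whether the interpreter is silicon or a man with a pencil adds nothing
to the table itself.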

My version of your restatement of Searle's argument is as follows:

Premise 1: If strong AI is true, then there exists a program P such that
implementing P on a Turing Machine is sufficient for mentality.

Premise 2:  Any such program P can be analyzed Chinese-room style and
thus shown not to have mentality.

Conclusion: Strong AI is false.

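Spelled out in quantifiers (my notation, not Searle's; write SAI for
"strong AI is true", read M(i) as "implementation i has mentality",
and let I(P) be the set of implementations of program P):

	Premise 1:   \mathrm{SAI} \rightarrow \exists P \, \forall i \in I(P) \, M(i)
	Premise 2:   \forall P \, \exists i \in I(P) \, \neg M(i)
	Conclusion:  \neg \mathrm{SAI}

Premise 2 is equivalent to \neg \exists P \, \forall i \in I(P) \, M(i),
the negation of Premise 1's consequent, so the conclusion follows by
modus tollens.
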
Will you agree with this version?