From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!yale.edu!cs.yale.edu!mcdermott-drew Tue Jan 21 09:27:11 EST 1992
Article 2891 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!yale.edu!cs.yale.edu!mcdermott-drew
From: mcdermott-drew@CS.YALE.EDU (Drew McDermott)
Subject: Re: Virtual Person?
Message-ID: <1992Jan19.132659.3061@cs.yale.edu>
Keywords: personal identity, searle
Sender: news@cs.yale.edu (Usenet News)
Nntp-Posting-Host: aden.ai.cs.yale.edu
Organization: Yale University Computer Science Dept., New Haven, CT 06520-2158
References: <5965@skye.ed.ac.uk> <1992Jan16.040733.23764@cs.yale.edu> <6007@skye.ed.ac.uk>
Date: Sun, 19 Jan 1992 13:26:59 GMT
Lines: 74

In article <6007@skye.ed.ac.uk> jeff@aiai.UUCP (Jeff Dalton) writes:

  In this
  reply, you address only the end of my article, and not the part where
  I explain what it was that I thought was question-begging.  Can I
  assume that you agree with the earlier part?  Or do you somehow think
  the later part shows I was wrong throughout?

The latter, I think.  Please don't ask me to recall who was begging
which question when!  Notice that I've skipped around in replying to
this message as well.

  Nor will you find me disagreeing with Dave Chalmers's version

    Premise 1: If strong AI is true, then there exists a program P such
    that implementing P is sufficient for mentality.

    Premise 2: Any program P can be implemented Chinese-room style,
    without being accompanied by mentality.

    Conclusion: Strong AI is false.

  at least as a starting point.

Well, we've gone beyond this in our attempts at clarification, and I
would like to get your version.
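(As an aside: the Chalmers version is a plain modus tollens, and its shape
can even be machine-checked.  Here is a minimal sketch in Lean 4 -- the
names `Program`, `sufficesForMentality`, and `strongAI` are placeholders
of mine, not anything from the thread; all the philosophical weight stays
in the premises.)

```lean
section ChineseRoom

-- Placeholder vocabulary: a type of programs, and a predicate saying
-- that implementing a given program is sufficient for mentality.
variable {Program : Type} (sufficesForMentality : Program → Prop)

-- Premise 1, read as a definition: strong AI says there exists a
-- program whose implementation suffices for mentality.
def strongAI : Prop := ∃ p, sufficesForMentality p

-- Premise 2 ⊢ Conclusion: if every program can be implemented
-- Chinese-room style without mentality, then strong AI is false.
theorem strongAI_false
    (premise2 : ∀ p, ¬ sufficesForMentality p) :
    ¬ strongAI sufficesForMentality :=
  fun ⟨p, hp⟩ => premise2 p hp

end ChineseRoom
```

The formal step is trivial, which is the point: all the disagreement in
this thread is over whether Premise 2 is true, not over the inference.)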

  A further source of confusion would be if you agreed with Daryl
  McCullough that

    Strong AI is simply the claim that a machine with the right
    behavior must, therefore, understand,

I don't.  At the risk of repeating Chalmers, let me point out how the
two claims could be different.  Let's use the label "process strong
AI" for the position that executing the right kind of program would
create a process that constituted a mind; and "behaviorist strong AI"
for McCullough's position.  

1. Process strong AI could be true without behaviorist strong AI, if
(a) the right kind of program would give rise to minds, and (b) there
were other ways of getting the behavior that did *not* give rise to
minds.  (E.g., there might be zombies you could grow by breeding
silicon DNA in tanks.)

2. Behaviorist strong AI could be true without process strong AI, if
(a) the right kind of behavior were always correlated with the
existence of a mind, and (b) there were no program that could give
rise to this behavior.  (E.g., you *have* to use protoplasm to get the
behavior.)

It would be interesting to get a version of Searle's argument that
starts by assuming behaviorist strong AI.  
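(Points 1 and 2 together make a purely logical observation: neither
thesis entails the other, because the two can take opposite truth
values.  A toy Lean 4 sketch -- with the two theses reduced to bare
propositional stand-ins of my own naming, which of course drops all
their content -- shows that neither implication is a logical truth:)

```lean
-- "Process implies behaviorist" is not valid: scenario 1 above
-- (zombie tanks) is the case where the first holds and the second fails.
example : ¬ (∀ process behaviorist : Prop, process → behaviorist) :=
  fun h => h True False trivial

-- "Behaviorist implies process" is not valid: scenario 2 above
-- (protoplasm only) is the case where the second holds and the first fails.
example : ¬ (∀ process behaviorist : Prop, behaviorist → process) :=
  fun h => h False True trivial
```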

  [This is from another post:]

  I suspect that part of what you're doing is to make one argument,
  which includes a response to the systems reply and the issue of
  multiple persons in one, out of the different arguments Searle
  presents.  The arguments appeared in a dialog.  Searle presented the
  Chinese Room, someone made the systems reply, and Searle answered
  that.  In my opinion, too many distortions are introduced by turning
  the dialog into a single argument.

This is an incredible concession, practically an admission that
Searle's logic depends on switching the pea from one shell to another
halfway through the "dialog."  Usually, an objection to an argument is
met by *amending the argument* to make it clearer.  At any given time
there is supposed to exist a version that meets all objections, and
avoids misunderstandings that give rise to objections.  Searle is
making a lot of hay off everyone's failure to force him to produce
such a version of his argument.

                                             -- Drew McDermott


