From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!samsung!uunet!mcsun!uknet!edcastle!aiai!jeff Tue Jan 21 09:27:38 EST 1992
Article 2942 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!samsung!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Virtual Person?
Keywords: personal identity, searle
Message-ID: <6027@skye.ed.ac.uk>
Date: 21 Jan 92 00:21:11 GMT
References: <5965@skye.ed.ac.uk> <1992Jan16.040733.23764@cs.yale.edu> <6007@skye.ed.ac.uk> <1992Jan19.132659.3061@cs.yale.edu>
Reply-To: jeff@aiai.UUCP (Jeff Dalton)
Organization: AIAI, University of Edinburgh, Scotland
Lines: 103

In article <1992Jan19.132659.3061@cs.yale.edu> mcdermott-drew@CS.YALE.EDU (Drew McDermott) writes:
>In article <6007@skye.ed.ac.uk> jeff@aiai.UUCP (Jeff Dalton) writes:
>
>  In this
>  reply, you address only the end of my article, and not the part where
>  I explain what it was that I thought was question-begging.  Can I
>  assume that you agree with the earlier part?  Or do you somehow think
>  the later part shows I was wrong throughout?
>
>The latter, I think.  Please don't ask me to recall who was begging
>which question when!

Ok, let's just drop it, unless it comes up again.

>  Nor will you find me disagreeing with Dave Chalmers version

>Well, we've gone beyond this in our attempts at clarification, and I
>would like to get your version.

I find it difficult to keep up with the volume of News, so there are
still lots of messages I haven't seen yet.  Also, it often happens
that a number of messages can go back and forth in the US before any
of them get to the UK.

>  A further source of confusion would be if you agreed with Daryl
>  McCullough that
>
>    Strong AI is simply the claim that a machine with the right
>    behavior must, therefore, understand,
>
>I don't.  At the risk of repeating Chalmers, let me point out how the
>two claims could be different.  Let's use the label "process strong
>AI" for the position that executing the right kind of program would
>create a process that constituted a mind; and "behaviorist strong AI"
>for McCullough's position.  
>
>1. Process strong AI could be true without behaviorist strong AI, if
>(a) the right kind of program would give rise to minds; (b) there were
>other ways of getting the behavior that did *not* give rise to minds.
>(E.g., there might be zombies you could grow by breeding silicon DNA
>in tanks.)

(Or using the wrong kind of program, perhaps?)

>2. Behaviorist strong AI could be true without process strong AI, if
>(a) the right kind of behavior is always correlated with the
>existence of a mind; (b) there is no program that can give rise to
>this behavior.  (E.g., you *have* to use protoplasm to get the
>behavior.) 

Well, great!  I agree with all of that.

>It would be interesting to get a version of Searle's argument that
>starts by assuming behaviorist strong AI.  

Another 2.b. would be if programs failed for Dreyfus-like
reasons.

>  [This is from another post:]
>
>  I suspect that part of what you're doing is to make one argument,
>  which includes a response to the systems reply and the issue of
>  multiple persons in one, out of the different arguments Searle
>  presents.  The arguments appeared in a dialog.  Searle presented the
>  Chinese Room, someone made the systems reply, and Searle answered
>  that.  In my opinion, too many distortions are introduced by turning
>  the dialog into a single argument.
>
>This is an incredible concession, practically an admission that
>Searle's logic depends on switching the pea from one shell to another
>halfway through the "dialog." 

I didn't mean it to sound like that.  I think there's a fairly
clear argument before he starts considering the various replies.
I even think the argument works better against the replies than
might be thought from the fact that Searle presents additional
arguments in response.  

Then, there are a number of different ways to combine everything
into one.  I shouldn't have implied that distortions were inherent
in any such attempt, but I do think it's difficult to do justice
to the dialogue.  It's not for nothing, after all, that so many
philosophers have resorted to dialogues.

> Usually, an objection to an argument is
>met by *amending the argument* to make it clearer.  

But _some_ objections don't really require a change in the
argument.  For example, if there's some question-begging, then
there's no need to change the argument in order to rule it out.
Or imagine, if you will, that whenever someone presented an argument
on the net they were expected to change it in response to every
disagreement.

>At any given time
>there is supposed to exist a version that meets all objections, and
>avoids misunderstandings that give rise to objections.  Searle is
>making a lot of hay off everyone's failure to force him to produce
>such a version of his argument.

You may well be right.

-- jd
