From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!wupost!uunet!mcsun!uknet!edcastle!aisb!jeff Tue Jan 28 12:16:11 EST 1992
Article 3034 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!wupost!uunet!mcsun!uknet!edcastle!aisb!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Virtual Person?
Message-ID: <1992Jan22.224344.7404@aisb.ed.ac.uk>
Date: 22 Jan 92 22:43:44 GMT
References: <1992Jan18.222329.23953@bronze.ucs.indiana.edu> <6028@skye.ed.ac.uk> <1992Jan22.195657.19911@bronze.ucs.indiana.edu>
Sender: news@aisb.ed.ac.uk (Network News Administrator)
Organization: AIAI, University of Edinburgh, Scotland
Lines: 23

In article <1992Jan22.195657.19911@bronze.ucs.indiana.edu> chalmers@bronze.ucs.indiana.edu (David Chalmers) writes:
>In article <6028@skye.ed.ac.uk> jeff@aiai.UUCP (Jeff Dalton) writes:
>
>>I don't think the virtual person works all that well in any case.
>>If the VP is in Searle's head (as when he memorizes the program),
>>then perhaps it's benefiting from the "causal powers" of Searle's
>>brain.
>
>That's Searle's problem, not AI's.  It's Searle who wants to
>argue that the VP doesn't understand.

I don't follow you.  Searle argues about whether things understand
merely by running the right program.  If the VP needs the "causal
powers" (if, we might say, Chinese Rooms understand only because
there's a person in them), then strong AI would still be in trouble,
even though Searle's argument might need adjusting.

Besides, there are lots of problems with the virtual person idea.
For instance: Would it be murder for Searle to forget the program?
Maybe these aren't problems that refute strong AI, but you do have
to be careful when you start talking about "persons".  Certainly
the claim that a new person would be created seems a bit strong.
I'd be happier if the systems reply could get by with something weaker.
