From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!wupost!uunet!tdatirv!sarima Tue Apr  7 23:23:23 EDT 1992
Article 4835 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!wupost!uunet!tdatirv!sarima
From: sarima@tdatirv.UUCP (Stanley Friesen)
Newsgroups: comp.ai.philosophy
Subject: Re: The Chinese Room (or Number Five's Alive)
Message-ID: <493@tdatirv.UUCP>
Date: 31 Mar 92 01:34:57 GMT
References: <7341@uqcspe.cs.uq.oz.au> <1992Mar29.185454.21236@psych.toronto.edu>
Reply-To: sarima@tdatirv.UUCP (Stanley Friesen)
Organization: Teradata Corp., Irvine
Lines: 21

In article <1992Mar29.185454.21236@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:
|Well, I will perhaps feel differently about this issue once I see AI types
|worrying over the moral implications of unplugging their machines.  Until
|*they* take this possibility seriously, I see no reason for me to.

But I do take it (or rather the underlying question) seriously; that is
really why I am participating in this discussion.  I am trying to develop
a concept of how to tell when an artificial entity is sufficiently
developed to need 'human rights'.

However, *unplugging* such a machine would probably not 'kill' it.  Most
computers now are quite capable of rebooting, and everything except the
contents of main memory (short-term memory) sits on stable storage.  Thus
this is more like giving the computer a Mickey Finn.  (When humans are
knocked out they tend to lose short-term memory contents too.)
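To make the distinction concrete, here is a minimal sketch in Python
(the file name and functions are hypothetical, just for illustration):
whatever is held only in RAM is lost when the power goes, while anything
checkpointed to disk comes back on reboot.

    import json, os

    STATE_FILE = "agent_state.json"   # hypothetical 'long-term memory' on disk

    def save_state(state):
        # Written to stable storage; this survives an unplugging.
        with open(STATE_FILE, "w") as f:
            json.dump(state, f)

    def load_state():
        # On reboot, everything except what lived only in RAM is recovered.
        if os.path.exists(STATE_FILE):
            with open(STATE_FILE) as f:
                return json.load(f)
        return {}   # a fresh start only if the disk itself is gone
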

To kill it you would need to destroy the disks and burn the back-ups.
-- 
---------------
uunet!tdatirv!sarima				(Stanley Friesen)



