From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael Tue Apr  7 23:23:33 EDT 1992
Article 4854 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael
From: michael@psych.toronto.edu (Michael Gemar)
Subject: Re: The Chinese Room (or Number Five's Alive)
Organization: Department of Psychology, University of Toronto
References: <7341@uqcspe.cs.uq.oz.au> <1992Mar29.185454.21236@psych.toronto.edu> <493@tdatirv.UUCP>
Message-ID: <1992Apr1.030024.13504@psych.toronto.edu>
Date: Wed, 1 Apr 1992 03:00:24 GMT

In article <493@tdatirv.UUCP> sarima@tdatirv.UUCP (Stanley Friesen) writes:
>In article <1992Mar29.185454.21236@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:
>|Well, I will perhaps feel differently about this issue once I see AI types
>|worrying over the moral implications of unplugging their machines.  Until
>|*they* take this possibility seriously, I see no reason for me to.
>
>But I do take it (or rather the underlying question) seriously; that is
>really why I am participating in this discussion.  I am trying to develop
>a concept of how to tell when an artificial entity is sufficiently
>developed to need 'human rights'.

To be honest, I was being a bit flip.  If you followed the "thinking windstorm"
thread I started a while back, you might recall that one of my concerns was
the moral implications of functionalism, namely, if minds can *literally*
be all around us, what happens to ethics?  Must we treat a roomful of air
as a potential moral entity?  Note that what I am concerned with here are
entities that are not necessarily artificial, merely not biologically
self-contained.

Although I haven't thought it out entirely, it seems to me that to take
functionalism as true is to require a radical rethinking of ethics.  When
literally all of creation (and all its possible permutations) is a potential
moral agent, things get really weird...

As far as computers in particular are concerned, I don't have a good answer
with regard to their potential moral status.  The best initial approach would
be to determine what features humans possess that make them moral agents, and
see if computers (running the appropriate software) possess them.  I don't have
a good handle on what features *would* be relevant.  Keep in mind that,
even if one accepts the Chalmersian position that functional states are somehow
associated with qualia, we have no idea what computers might actually *feel*,
if anything.  This makes it rather difficult to make utilitarian evaluations.


My comments with regard to AI researchers' treatment of their machines were
meant only partly in jest.  As I noted earlier, I believe that functionalism
has radical implications for ethics.  However, I don't believe that any
AI researchers take their work to have *any* moral relevance, and thus I
have a hard time believing that they actually *believe* what they claim.
Certainly if SHRDLU had a mind, it would at least be an open question as
to whether erasing the program had some moral import.

>
>However, *unplugging* such a machine would probably not 'kill' it; most
>computers now are quite capable of rebooting, and everything except the
>contents of main memory (short-term memory) is invariably stable.  Thus
>this is more like giving the computer a Mickey Finn.  (When humans are
>knocked out they tend to lose short-term memory contents too.)
>
>To kill it you would need to destroy the disks and burn the back-ups.

Yeah, yeah, I know.  I was striving for brevity.


- michael
