From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!sun-barr!olivea!uunet!tdatirv!sarima Tue Apr  7 23:23:53 EDT 1992
Article 4890 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!sun-barr!olivea!uunet!tdatirv!sarima
From: sarima@tdatirv.UUCP (Stanley Friesen)
Newsgroups: comp.ai.philosophy
Subject: Re: The Chinese Room (or Number Five's Alive)
Message-ID: <500@tdatirv.UUCP>
Date: 2 Apr 92 22:04:51 GMT
References: <7341@uqcspe.cs.uq.oz.au> <1992Mar29.185454.21236@psych.toronto.edu> <493@tdatirv.UUCP> <1992Apr1.030024.13504@psych.toronto.edu>
Reply-To: sarima@tdatirv.UUCP (Stanley Friesen)
Organization: Teradata Corp., Irvine
Lines: 51

In article <1992Apr1.030024.13504@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:
|In article <493@tdatirv.UUCP> sarima@tdatirv.UUCP (Stanley Friesen) writes:
|To be honest, I was being a bit flip.  If you followed the "thinking windstorm"
|thread I started a while back, you might recall that one of my concerns was
|the moral implications of functionalism, namely, if minds can *literally*
|be all around us, what happens to ethics?  Must we treat a roomful of air
|as a potential moral entity?  Note that what I am concerned with here are
|entities that are not necessarily artificial, merely not biologically
|self-contained.

I do not think that 'functionalism' necessarily has the implications you
suggest.  I do not believe that it is possible for a storm or a room
full of air to possess the necessary functional relationships to meet
my criteria for 'intelligence'.

Basically, I reject the 'rock can implement any FSA' argument.  I have not
posted on it because I have not really had anything new to add; I agree
with the opposition that the argument rests on an incorrect handling of
counterfactuals.

Thus, I do not see how a fundamentally unstructured mass can meet the
functional requirements of a thinking system.

|Although I haven't thought it out entirely, it seems to me that to take      
|functionalism as true is to require a radical rethinking of ethics.  When
|literally all of creation (and all its possible permutations) are potential
|moral agents, things get really weird...

True, but I do not see that this is a real, or even likely, conclusion,
so I do not concern myself with it for now.

|As far as computers in specific are concerned, I don't have a good answer
|with regard to their potential moral status.  The best initial approach would
|be to determine what features humans possess that make them moral agents, and
|see if computers (running the appropriate software) possess them.

I agree with this; I have long supported an 'experimental' approach to
the AI problem.
|
|My comments with regard to AI researchers' treatment of their machines was
|meant only partly in jest.  As I noted earlier, I believe that functionalism
|has radical implications for ethics.  However, I don't believe that any
|AI researchers take their work to have *any* moral relevance, and thus, I
|have a hard time believing that they actually *believe* what they claim.

My tendency would be to treat any machine that truly acted intelligently
as worthy of respect, and I would tend to take its word for how it felt
about things unless I had evidence to the contrary.

-- 
---------------
uunet!tdatirv!sarima				(Stanley Friesen)


