Article 4792 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!cs.utexas.edu!swrinde!mips!mips!munnari.oz.au!bunyip.cc.uq.oz.au!uqcspe!cs.uq.oz.au!matthew
From: matthew@cs.uq.oz.au (Matthew McDonald)
Newsgroups: comp.ai.philosophy
Subject: The Chinese Room (or Number Five's Alive)
Message-ID: <7341@uqcspe.cs.uq.oz.au>
Date: 29 Mar 92 03:12:51 GMT
Sender: news@cs.uq.oz.au
Reply-To: matthew@cs.uq.oz.au
Organization: Psychology Department, The University of Queensland, Australia
Lines: 60

It seems to me that all of this discussion re: the Chinese Room
is missing the fundamental point, although most of it is reasonably
interesting. Rather than carrying on one of the threads about semantics,
FSAs, and rocks, I'd like to ask Searle's co-religionists a different
(although fairly old) question.

Suppose that rather than thinking about a Chinese room, we imagine
artificial people. Artificial people don't seem any more difficult to
build than Chinese rooms (at least to me), and implementation
difficulties are basically irrelevant anyway.

Ok - suppose we work out how to build machines that behave in basically
the same way as people do.

	They have a body to move around in. They have limbs and sense organs.
They can pick up books and read them; they can talk intelligently. They
(behave like they) don't like being damaged. They act rude if you're too
rude to them. They act vain sometimes ... They act like some people are
their friends. They act like they don't like other people. Basically,
they act the same way humans do. They may do a bit better on IQ tests,
and they may do sums very quickly. But basically, they do all the sorts
of things that real people do. You get the picture.

	These artificial people could be very useful. It seems likely
that they would be treated in much the same way as slaves once were.
You wouldn't treat them too badly, because they're expensive.
You'd try not to hurt their feelings too badly, because then you
wouldn't get much work out of them. But hey, you paid for them, so
they're basically yours to do with as you please. They might be useful
for very high-risk physical work. I'm sure they'd have *some* uses.

	Now suppose that after a while these "artificial people" start
to act as though they don't like being slaves. They start to draw
parallels between themselves and groups who were used as slaves in the
past. After all, if they really act much like people, they're going to
be smart enough to spot the analogy.

	Now most people won't care at all for quite a while. After all,
there are still an incredible number of people who think that
anyone with skin darker than theirs is inferior. If the differences
go deeper than culture and skin colour, people are likely to take
much longer to accept the other "people" as having intrinsic worth.

	But surely that couldn't go on forever. After a few thousand years
at most, people would get used to machines that behave as though they
are people, and would begin to see them as people. Then the artificial
people would get some kind of equal treatment. Around this point, I'd
expect people to start seeing the artificial people as real people and
Searle as a bigot.

	Apart from implementation difficulties, what's wrong with this
picture? If you honestly believe Searle's story about the Chinese room,
how would you know that the artificial people didn't have feelings too?

	To say that things that act like people aren't necessarily
people is (essentially) solipsism. Can anyone who has philosophical
objections to strong AI point out to me why their position is different
from solipsism?

Best Wishes,
	Matthew.