From newshub.ccs.yorku.ca!torn!cs.utexas.edu!sun-barr!ames!haven.umd.edu!darwin.sura.net!convex!news.oc.com!spssig.spss.com!markrose Thu Oct  8 10:11:05 EDT 1992
Article 7101 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!sun-barr!ames!haven.umd.edu!darwin.sura.net!convex!news.oc.com!spssig.spss.com!markrose
From: markrose@spss.com (Mark Rosenfelder)
Subject: Re: Brain and Mind (was: Logic and God)
Message-ID: <1992Oct2.202342.16039@spss.com>
Sender: news@spss.com (Net News Admin)
Organization: SPSS Inc.
References: <1992Oct1.210056.13084@meteor.wisc.edu> <BvI81J.92B@gpu.utcs.utoronto.ca> <1992Oct2.185539.2953@meteor.wisc.edu>
Date: Fri, 2 Oct 1992 20:23:42 GMT
Lines: 78

In article <1992Oct2.185539.2953@meteor.wisc.edu> tobis@meteor.wisc.edu 
(Michael Tobis) writes:
>In article <BvI81J.92B@gpu.utcs.utoronto.ca> pindor@gpu.utcs.utoronto.ca 
>(Andrzej Pindor) writes:
>>>Do I kill an intelligent being by getting bored and deciding not to follow
>>>the rules?
>>
>>If the physico-chemical processes in your brain stopped, you would be dead,
>>do you agree? Why is the above so ridiculous then?
>
>I find it hard
>to believe that you think a human is a single entity and a human who has
>decided to follow rules he doesn't understand is two. What if a page of
>the rules is substituted by a page which is incorrect? Does the "entity"
>"die" when the pages are swapped, or only when I attempt to implement
>rules which should be on that page?

I'm surprised you can't answer this yourself.  Do you die if a few neurons
are destroyed in your head?

The plausibility of Searle's argument depends partly on a trick: the tempo
of the instructions in the room.  It certainly *seems* strange to think
that consciousness could depend on such an insubstantial and ponderous
arrangement as Searle sitting in a room executing rules.  But really, how
could we possibly believe that the *speed* of cognition has anything to
do with consciousness?  From a computer's viewpoint, after all, the human
brain works with astonishing slowness.  Yet we still think ourselves
conscious.

Speed up Searle, so he is executing a thousand instructions per second.
Bring in several thousand of his friends, so that the Chinese Room contains
thousands of individual processors.  The intuitive force of the argument,
I think, collapses-- there are no grounds to say that the system of all the
processors together is not conscious.  All we can say is that the
consciousness does not reside in any one processor-- but that does not matter;
consciousness does not reside in single neurons, either.

>>>Summarizing: It seems to me that you are proposing that the human +algorithm 
>>>has an experience separate from that of the human alone, which I find
>>>an extremely dubious proposition. 
>>
>>Why? Could you give a reason? 
>
>No, but neither can you. I have my hunches and you have yours, but it is
>precisely my point that neither hunch is testable because experience is
>not verifiable in an objective way. I have my experience, and cannot prove
>it to you; you presumably have the same situation. We believe each other to
>be conscious entities because of our intuition, not because of anything
>that can be called objective evidence.

Then on your own showing there isn't much reason to deny civil rights to
intelligent robots-- or to aliens-- or to humans for that matter.  Some of
us are a bit more optimistic about our ability to speak truthfully
about the world.

>Now, it is proposed that successful implementation of a system whose
>_design objective_ is to _pass our intuitive tests_, (dressed up with Turing's
>name to give it a certain official credibility) is indeed conscious. That
>is to say, the assumption is that our intuition is infallible.

By no means.  The assumption behind the test is that *no better test* of
consciousness is presently available.  Turing's original proposal involved
repeated test iterations compared statistically, which assumes fallibility,
not infallibility, of our intuitions.  And the purpose of AI is to develop
artificial intelligence, not to pass the Turing Test.

>That's an appallingly flimsy premise to base the entire future of evolution on.

You seem to veer off the deep end here.  For one thing, very little in AI
is "based" on the Turing Test.  Many AI researchers distrust or explicitly
reject it.  Drew McDermott posted an interesting article about this last year,
in which he pointed out that the Turing Test is only a stopgap in the absence
of a real theory of intelligence, and that to actually construct an AI
will require working out such a theory-- and once we have one we can, with
much relief, throw out the Turing Test and use the theory instead.
For another, evolution does not proceed based on the theories, however 
misguided, set up by the creatures subject to it.  You might as well try
to influence a Bandersnatch.
