From newshub.ccs.yorku.ca!torn!cs.utexas.edu!zaphod.mps.ohio-state.edu!uwm.edu!daffy!uwvax!meteor!tobis Thu Oct  8 10:11:13 EDT 1992
Article 7115 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!zaphod.mps.ohio-state.edu!uwm.edu!daffy!uwvax!meteor!tobis
From: tobis@meteor.wisc.edu (Michael Tobis)
Subject: Re: Brain and Mind (was: Logic and God)
Message-ID: <1992Oct5.022907.6131@meteor.wisc.edu>
Organization: University of Wisconsin, Meteorology and Space Science
References: <BvI81J.92B@gpu.utcs.utoronto.ca> <1992Oct2.185539.2953@meteor.wisc.edu> <1992Oct2.202342.16039@spss.com>
Date: Mon, 5 Oct 92 02:29:07 GMT
Lines: 138

In article <1992Oct2.202342.16039@spss.com> markrose@spss.com (Mark Rosenfelder) writes:
>In article <1992Oct2.185539.2953@meteor.wisc.edu> tobis@meteor.wisc.edu 
>(Michael Tobis) writes:
>>In article <BvI81J.92B@gpu.utcs.utoronto.ca> pindor@gpu.utcs.utoronto.ca 
>(Andrzej Pindor) writes:
>>>>Do I kill an intelligent being by getting bored and deciding not to follow
>>>>the rules?

>>I find it hard
>>to believe that you think a human is a single entity and a human who has
>>decided to follow rules he doesn't understand is two. What if a page of
>>the rules is substituted by a page which is incorrect? Does the "entity"
>>"die" when the pages are swapped, or only when I attempt to implement
>>rules which should be on that page?

>I'm surprised you can't answer this yourself.  Do you die if a few neurons
>are destroyed in your head?

If I am an algorithm, and you remove neurons essential to the implementation
of the algorithm, I don't know the answer to your question. Since I can't
imagine that I am an algorithm, I can't imagine what answer you propose.

Does your system lose its putative consciousness 1) when the wrong rules are 
substituted, 2) when the wrong rules are implemented, 3) when the wrong 
result is output, or 4) not at all, regardless of how wrong the algorithm is?

Either all procedures are conscious or only some procedures are conscious.
My question is whether you believe an arbitrary algorithm, to which no meaning
can plausibly be ascribed, is conscious, or whether only certain ones are, and 
how you can distinguish between the two types if not. What about an algorithm
which responds appropriately to any Chinese input unless the output is ready on
an even-numbered clock tick, in which case it responds with gibberish? Is it
half conscious?
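For concreteness, the clock-tick machine above can be sketched in a few lines
of modern code. This is only an illustration of the thought experiment; the
helper names and the choice of a 12-letter gibberish string are my own
hypothetical stand-ins, not anything in the original argument.

```python
import random
import string

def competent_reply(chinese_input):
    # Stand-in for a rule system that always responds appropriately.
    # (Hypothetical: the actual rules are deliberately left unspecified.)
    return "appropriate response to " + chinese_input

def half_conscious_room(chinese_input, clock_tick):
    """Respond sensibly only when the answer is ready on an odd tick."""
    if clock_tick % 2 == 0:
        # Even-numbered tick: emit gibberish instead of the competent reply.
        return "".join(random.choice(string.ascii_letters) for _ in range(12))
    return competent_reply(chinese_input)
```

The point of the example is that this trivial wrapper is behaviorally
intermediate between a "competent" and an "incompetent" system, which makes
any behavioral criterion for consciousness awkward to apply to it.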

Only the tenacious insistence that intelligent function is identical to
experience allows one to insist that my following rules I don't understand 
creates a conscious entity, while my following rules I do understand does not.

>The plausibility of Searle's argument depends partly on a trick: the tempo
>of the instructions in the room.  It certainly *seems* strange to think
>that consciousness could depend on such an insubstantial and ponderous
>arrangement as Searle sitting in a room executing rules.  But really, how
>could we possibly believe that the *speed* of cognition has anything to
>do with consciousness?  From a computer's viewpoint, after all, the human
>brain works with astonishing slowness.  Yet we still think ourselves
>conscious.

I will stipulate that if somehow intelligence is purely algorithmic, it is
unlikely that the speed of the implementation could matter.

But I think it's no trick; it's not the slowness of the me+rules system
that bothers me; it's its structure. I cannot envision a plausible theory
of subjective consciousness that could allow it to arise from such a system.

>Then on your own showing there isn't much reason to deny civil rights to
>intelligent robots-- or to aliens-- or to humans for that matter.  Some of
>us are a bit more optimistic about our ability to speak truthfully
>about the world.

I agree with your latter statement - most scientifically literate people
are more optimistic than I regarding our ability to integrate consciousness
into our systems. As for the former statement - my reason is self-defense.
Granting rights to humans seems to just barely work out, or perhaps just 
barely fail, and we know at least that the set of conscious humans is 
nonempty. Any alternative approach seems profoundly dangerous, at least 
regarding our constructs. (I withhold judgement on the aliens until 
I meet them.)

>>Now, it is proposed that successful implementation of a system whose
>>_design objective_ is to _pass our intuitive tests_, (dressed up with Turing's
>>name to give it a certain official credibility) is indeed conscious. That
>>is to say, the assumption is that our intuition is infallible.

>By no means.  The assumption behind the test is that *no better test* of
>consciousness is presently available.  Turing's original proposal involved
>repeated test iterations compared statistically, which assumes fallibility,
>not infallibility, of our intuitions.  And the purpose of AI is to develop
>artificial intelligence, not to pass the Turing Test.

I am convinced that no better test than the Turing Test is possible, because
there is no plausible objective measure for the presence or absence of
subjective phenomena. I have no doubt that many in AI would consider an
implementation that passed the Turing Test as often as a human does to be
a great success. But since such a construct is precisely the goal that some
have in mind, reaching that goal would be inadequate evidence of the existence
of the phenomenon of subjective consciousness implicitly being searched for.
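The "statistical" version of the test mentioned above amounts to aggregating
many independent trials rather than trusting any single judgment. A minimal
sketch of that bookkeeping, with hypothetical trial data of my own invention:

```python
def turing_test_score(judgments):
    """Fraction of trials in which the judge took the machine for the human.

    `judgments` is a list of booleans, one per independent trial:
    True if the interrogator judged the machine to be the human.
    In Turing's statistical framing, no single trial is decisive; the
    machine "passes" to the degree its score approaches a human's.
    """
    return sum(judgments) / len(judgments)

# Hypothetical trial outcomes, for illustration only.
machine = turing_test_score([True, False, True, True, False])         # 0.6
human_baseline = turing_test_score([True, True, False, True, False])  # 0.6
passes = abs(machine - human_baseline) < 0.1
```

Note that even a perfect score here is a fact about interrogators' judgments,
not about the machine's inner life - which is exactly the gap at issue.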

>>That's an appallingly flimsy premise to base the entire future of evolution on.

>You seem to veer off the deep end here.  

I can see why you might think so, but see below.

>For one thing, very little in AI
>is "based" on the Turing Test.  Many AI researchers distrust or explicitly
>reject it.  Drew McDermott posted an interesting article about this last year,
>in which he pointed out the Turing Test is only a stopgap in the absence
>of a real theory of intelligence, and that to actually construct an AI
>will require working out such a theory-- and once we have one we can, with
>much relief, throw out the Turing Test and use the theory instead.

I cannot see how such a theory can be verified, even if valid. Would
you use the Turing Test? How can you verify any objective theory of
subjective consciousness? I note you are again using AI as synonymous with
artificial consciousness. This unquestioned assumption is imho incorrect
and unjustifiable, and is in any case unverified and probably unverifiable.

Unfortunately, many AI people steadfastly refuse to see the distinction.

>For another, evolution does not proceed based on the theories, however 
>misguided, set up by the creatures subject to it.  You might as well try
>to influence a Bandersnatch.

Evolution is little affected by our theories, but its course is very
sensitive to our behavior.

We have reached the point where we have a great deal of 
influence over the future of our planet, and some AI workers explicitly
believe that human dominance will, in fairly short order, be supplanted
by dominance of "artificial life", and some of them relish the prospect.

In the absence of a clear theory of consciousness, I think this attitude is
the single most dangerous idea I have ever heard of. Unfortunately, it
is not me who has gone off the deep end here. See the current issue of
Whole Earth Review, on the topic of Artificial Life, for discussions with
people who are so convinced that artificial consciousness is identical with
AI, that they are willing to risk the future of humanity and the planet
on that belief.

I think it is likely that our creations may be more adaptive than we are,
while still lacking subjective experience. If they triumph over us, we
will have allowed nonlife to triumph over life. Are you so confident that
intelligence is identical to experience that you are willing to take an
infinite risk? Even if you are correct, it is not clear that the payoff
is positive!

mt
