Article 7121 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!torn!utcsri!rutgers!sun-barr!ames!haven.umd.edu!darwin.sura.net!sgiblab!a2i!pagesat!spssig.spss.com!markrose
From: markrose@spss.com (Mark Rosenfelder)
Newsgroups: comp.ai.philosophy
Subject: Re: Brain and Mind (was: Logic and God)
Message-ID: <1992Oct5.181741.7241@spss.com>
Date: 5 Oct 92 18:17:41 GMT
References: <1992Oct2.185539.2953@meteor.wisc.edu> <1992Oct2.202342.16039@spss.com> <1992Oct5.022907.6131@meteor.wisc.edu>
Sender: news@spss.com (Net News Admin)
Organization: SPSS, Inc.
Lines: 127

In article <1992Oct5.022907.6131@meteor.wisc.edu> tobis@meteor.wisc.edu 
(Michael Tobis) writes:
>If I am an algorithm, and you remove neurons essential to the implementation
>of the algorithm, I don't know the answer to your question. Since I can't
>imagine that I am an algorithm, I can't imagine what answer you propose.
>
>Does your system lose its putative consciousness 1) when the wrong rules are 
>substituted 2) when the wrong rules are implemented 3) when the wrong 
>result is output or 4) not at all, regardless of how wrong the algorithm is?

Let's try to put this in perspective.  In a truly astonishing mismatch of 
hardware to software, we have chosen to execute an enormously complicated 
AI program on a single-processor, 0.1-flops machine consisting of 
John Searle in a room.  Consider a single question and answer, which require
perhaps a billion instructions and offer Searle steady employment for many 
years.  
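
To put a number on "many years" (taking the 0.1-flops rate and the
billion-instruction exchange above at face value), a quick back-of-the-
envelope sketch in Python:

  # Timing for the Searle-in-a-room processor.  The rates below are
  # the assumptions from the paragraph above, nothing more.
  SEARLE_IPS = 0.1            # instructions per second
  INSTRUCTIONS_PER_QA = 1e9   # one question-and-answer exchange

  seconds = INSTRUCTIONS_PER_QA / SEARLE_IPS
  years = seconds / (60 * 60 * 24 * 365)
  print(round(years))         # ~317 years for a single exchange

Call it three centuries per question.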

The system exhibits consciousness, if at all, on this glacial time-scale, 
not on Searle's.  A few mistakes by Searle, in the course of these long 
years, do not affect the consciousness of the system (provided the 
algorithm is at least as robust as a human brain).  The question of
whether the system is conscious at any one moment, or is conscious at the
moment Searle is making a mistake, is strictly comparable to the question
of whether a single neuron (perhaps misfiring) is conscious.

>Either all procedures are conscious or only some procedures are conscious.
>My question is whether you believe an arbitrary algorithm, to which no 
>meaning can plausibly be ascribed, is conscious, or whether only certain 
>ones are, and how you can distinguish between the two types if not. What 
>about an algorithm which responds appropriately to any Chinese input 
>unless the output is ready on an even numbered clock tick, in which case it 
>responds with gibberish? Is it half conscious?

In my opinion, only some algorithms could be described as conscious.
Conscious algorithms are those whose programming explicitly supports 
activities we describe as conscious, such as self-analysis and self-
simulation; I think talk about consciousness emerging spontaneously
out of complexity is nonsense.  Your machine that spouts gibberish half
the time may or may not meet this definition, which makes no reference
to external behavior.
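
As a toy illustration of a criterion that "makes no reference to external
behavior" (the class and check below are my own invention, purely to make
the structural point):

  # Structural, not behavioral, test: inspect the program's design for
  # explicit self-analysis and self-simulation machinery.
  class Agent:
      def __init__(self):
          self.state = {"goal": "answer questions", "confidence": 0.9}
          self.self_model = dict(self.state)  # explicit model of itself

      def self_analyze(self):
          # Compare the self-model against the actual state.
          return {k: (self.self_model.get(k), v)
                  for k, v in self.state.items()}

      def self_simulate(self, hypothetical):
          # Predict its own response to a counterfactual state.
          return "comply" if hypothetical.get("confidence", 0) > 0.5 else "defer"

  def supports_self_modeling(obj):
      return hasattr(obj, "self_analyze") and hasattr(obj, "self_simulate")

  print(supports_self_modeling(Agent()))  # True

The even-tick gibberish machine could pass or fail this sort of check
regardless of what its outputs look like.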

>Only the tenacious insistence that intelligent function is identical to
>experience allows one to insist that me following rules I don't understand 
>creates a conscious entity, while me following rules I do understand does not.

Huh?  If you are Searle in a room, consciousness occurs when you execute
an algorithm which implements consciousness, as described above-- whether
you understand the rules or not.  Searle (speaking now of the professor, not
the processor) has confused things enormously by describing a computer
containing a human processor; it leads to a foolish confusion between the 
consciousness (or intelligence or understanding) of the processor and that of 
the system.  Real computers all have certifiably stupid, unconscious CPUs.
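
The distinction is easy to see in miniature (the rule table below is
invented for illustration; in the Chinese-room setup the keys would be
Chinese symbols the rule-follower can't even read):

  # The "CPU" follows rules it does not understand; whatever competence
  # exists lives in the rule table (the program), not the rule-follower.
  RULES = {
      "How are you?": "Fine, thanks.",
      "What is your name?": "I am a program.",
  }

  def dumb_processor(symbol):
      # Pure shape-matching: look up and copy.  No understanding needed.
      return RULES.get(symbol, "???")

  print(dumb_processor("How are you?"))  # the *system* answers, not the lookup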

>But I think it's no trick; it's not the slowness of the me+rules system
>that bothers me- it's its structure. I cannot envision a plausible theory
>of subjective consciousness that could allow it to arise from such a system.

Presumably this means you have in hand "a plausible theory of subjective
consciousness" that derives from some other theoretical principles?  If not,
this is no criticism of the theory underlying AI.

>>>Now, it is proposed that successful implementation of a system whose
>>>_design objective_ is to _pass our intuitive tests_, (dressed up with 
>>>Turing's name to give it a certain official credibility) is indeed 
>>>conscious. That is to say, the assumption is that our intuition is 
>>>infallible.
>>
>>By no means.  The assumption behind the test is that *no better test* of
>>consciousness is presently available.  Turing's original proposal involved
>>repeated test iterations compared statistically, which assumes fallibility,
>>not infallibility, of our intuitions.  And the purpose of AI is to develop
>>artificial intelligence, not to pass the Turing Test.
>
>I am convinced that no better test than the Turing Test is possible, because
>there is no plausible objective measure for the presence or absence of
>subjective phenomena. I have no doubt that many in AI would consider an
>implementation that passed the Turing Test as often as a human as
>a great success. Given that such a construct is a goal that some have in
>mind, reaching that goal would be inadequate evidence of the existence
>of the phenomenon of subjective consciousness implicitly being searched for.

I agree, but only because I think the Turing Test is provisional and
unsatisfactory.  However, your comments don't really address my point.
Yes, lots of folks, including me, would be excited by a program that passes 
the Turing Test; this does not prove either of your claims: that the test is 
the goal of AI research, or that AI researchers believe "our intuition is 
infallible."
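
For what it's worth, the statistical reading of Turing's proposal can be
sketched in a few lines (the trial data and the 5% threshold here are
placeholders, not anything Turing specified):

  # Fallible judges, aggregated: compare the judges' error rate against
  # the machine to their error rate in an all-human control condition.
  def error_rate(judgments):
      return sum(1 for j in judgments if j == "wrong") / len(judgments)

  machine_trials = ["wrong", "right", "wrong", "right", "right"] * 20
  control_trials = ["wrong", "right", "right", "wrong", "right"] * 20

  gap = abs(error_rate(machine_trials) - error_rate(control_trials))
  print("indistinguishable" if gap < 0.05 else "distinguishable")

No single judgment settles anything; only the aggregate does, which is
precisely an admission that individual intuitions are fallible.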

>>Drew McDermott posted an interesting article about this last year,
>>in which he pointed out the Turing Test is only a stopgap in the absence
>>of a real theory of intelligence, and that to actually construct an AI
>>will require working out such a theory-- and once we have one we can, with
>>much relief, throw out the Turing Test and use the theory instead.
>
>I cannot see how such a theory can be verified, even if valid. Would
>you use the Turing Test? How can you verify any objective theory of
>subjective consciousness? 

It's hard to say without having the theory in hand, isn't it?  But to verify
any theory you look for testable consequences, and test them.  These need 
not be exclusively external; the successful theory should explain not only
human behavior but human neurology.

>I note you are again using AI as synonymous with
>artificial consciousness. This unquestioned assumption is imho incorrect
>and unjustifiable, and is in any case unverified and probably unverifiable.

I don't see them as synonymous at all.  I think artificial consciousness
has hardly even been attempted.  Why this is, I don't know; the charitable
assumption is that researchers have been more interested in intelligence.

>We have reached the point where we have a great deal of 
>influence over the future of our planet, and some AI workers explicitly
>believe that human dominance will, in fairly short order, be supplanted
>by dominance of "artificial life", and some of them relish the prospect.
>
>In the absence of a clear theory of consciousness, I think this attitude is
>the single most dangerous idea I have ever heard of. Unfortunately, it
>is not me who has gone off the deep end here. See the current issue of
>Whole Earth Review, on the topic of Artificial Life, for discussions with
>people who are so convinced that artificial consciousness is identical with
>AI, that they are willing to risk the future of humanity and the planet
>on that belief.

If this is the most dangerous idea you can think of, I think you need
to broaden your reading a bit.  You might start with Stanislaw Lem's
_The Cyberiad_, which may reconcile you to an all-robotic world... 


