From newshub.ccs.yorku.ca!torn!cs.utexas.edu!sun-barr!ames!haven.umd.edu!darwin.sura.net!zaphod.mps.ohio-state.edu!uwm.edu!daffy!uwvax!meteor!tobis Wed Oct 14 14:58:06 EDT 1992
Article 7162 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!sun-barr!ames!haven.umd.edu!darwin.sura.net!zaphod.mps.ohio-state.edu!uwm.edu!daffy!uwvax!meteor!tobis
From: tobis@meteor.wisc.edu (Michael Tobis)
Subject: Re: Brain and Mind (was: Logic and God)
Message-ID: <1992Oct8.174224.20547@meteor.wisc.edu>
Organization: University of Wisconsin, Meteorology and Space Science
References: <1992Oct2.202342.16039@spss.com> <1992Oct5.022907.6131@meteor.wisc.edu> <1992Oct5.181741.7241@spss.com>
Date: Thu, 8 Oct 92 17:42:24 GMT
Lines: 163

In article <1992Oct5.181741.7241@spss.com> markrose@spss.com (Mark Rosenfelder) writes:
>In article <1992Oct5.022907.6131@meteor.wisc.edu> tobis@meteor.wisc.edu 
>(Michael Tobis) writes:

>Let's try to put this in perspective.  In a truly astonishing mismatch of 
>hardware to software, we have chosen to execute an enormously complicated 
>AI program on the single-processor, 0.1-flops processor consisting of 
>John Searle in a room.  Consider a single question and answer, which require
>perhaps a billion instructions and offer Searle steady employment for many 
>years.  

If consciousness is purely algorithmic, then surely the rate at which the
algorithm is implemented doesn't matter. If there is more to it (grounding,
neurophysiology, quanta, or even something as yet unidentified), then the
rate may be significant; but on the hypothesis that consciousness arises
from formal symbol manipulation, it cannot be.
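
For concreteness, here is what your own numbers imply (a back-of-the-
envelope sketch in Python; the billion-instruction and 0.1-operations-
per-second figures are yours):

    # Timescale for Searle-in-a-room, using the figures quoted above:
    # ~10^9 instructions per exchange at ~0.1 instructions per second.
    instructions = 1e9                        # per question and answer
    rate = 0.1                                # instructions per second
    seconds = instructions / rate
    years = seconds / (3600 * 24 * 365)
    print("%.0f years per exchange" % years)  # roughly 317 years

Three centuries per exchange. On the purely algorithmic hypothesis that
should make no difference at all, which is rather the point.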

>The system exhibits consciousness, if at all, on this glacial time-scale, 
>not on Searle's.  A few mistakes by Searle, in the course of these long 
>years, do not affect the consciousness of the system (at least if the 
>algorithm is at least as robust as a human brain is).  The question of
>whether the system is conscious at any one moment, or is conscious at the
>moment Searle is making a mistake, is strictly comparable to the question
>of whether a single neuron (perhaps misfiring) is conscious.

Well, I think that's a good question! If what the neurons are doing is
just implementing an algorithm, then by Turing's construction the
algorithm can be reduced to a sequence of trivial operations, each
comparable to a neuron firing, no one of which can plausibly be
considered a conscious process in itself. You take this as evidence of
an "emergent property", while I take it as evidence that our
understanding of consciousness is incomplete.
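
To make the reduction concrete, here is a minimal sketch (a toy of my
own, not anyone's model of the brain) of a single Turing-machine step;
by the Church-Turing thesis, the running of any algorithm decomposes
into repetitions of something this trivial:

    # One step of a Turing machine: read a symbol, look up a rule,
    # write, move, change state.  Nothing here resembles experience.
    def step(tape, head, state, rules):
        symbol = tape.get(head, 'blank')
        write, move, new_state = rules[(state, symbol)]
        tape[head] = write
        return head + move, new_state

    # One rule: in state 'q0' on a blank, write '1' and move right.
    rules = {('q0', 'blank'): ('1', 1, 'q0')}
    tape, head, state = {}, 0, 'q0'
    for _ in range(5):
        head, state = step(tape, head, state, rules)
    print(tape)  # {0: '1', 1: '1', 2: '1', 3: '1', 4: '1'}

No single call to step() is a plausible candidate for consciousness,
any more than a single neuron firing is.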

>Conscious algorithms are those whose programming explicitly supports 
>activities we describe as conscious, such as self-analysis and self-
>simulation; I think talk about consciousness emerging spontaneously
>out of complexity is nonsense.  Your machine that spouts gibberish half
>the time may or may not meet this definition, which makes no reference
>to external behavior.

Well, I agree that a definition of consciousness SHOULD make no
reference to external behavior. I am unconvinced that an objective measure
of such a phenomenon is possible, though, which is why I suspect that
the project of artificial consciousness is unlikely to be demonstrably
realized.
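
To be fair to that definition, the architecture it names is easy enough
to sketch, even though sketching it settles nothing (a toy of my own
devising, nothing more):

    # A toy "self-simulating" agent: it predicts its own choice by
    # running its own decision procedure on a copy of its own state.
    class Agent:
        def __init__(self, preferences):
            self.preferences = preferences        # option -> score

        def decide(self, options):
            return max(options, key=lambda o: self.preferences.get(o, 0))

        def predict_own_choice(self, options):
            # "Self-simulation": apply my procedure to a copy of me.
            return Agent(dict(self.preferences)).decide(options)

    a = Agent({'tea': 2, 'coffee': 3})
    print(a.predict_own_choice(['tea', 'coffee']))  # coffee

Whether running such a loop, at whatever scale, amounts to consciousness
is precisely what is in dispute; the code certainly doesn't tell us.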

>>Only the tenacious insistence that intelligent function is identical to
>>experience allows one to insist that me following rules I don't understand 
>>creates a conscious entity, while me following rules I do understand does not.

>Huh?  If you are Searle in a room, consciousness occurs when you execute
>an algorithm which implements consciousness, as described above-- whether
>you understand the rules or not.  Searle (speaking now of the professor, not
>the processor) has confused things enormously by describing a computer
>containing a human processor; it leads to a foolish confusion between the 
>consciousness (or intelligence or understanding) or the processor and that of 
>the system.  Real computers all have certifiably stupid, unconscious CPUs.

Forgive me, I had thought I was bravely fighting a rearguard action
in support of dualism against a materialist zeitgeist. Your response
confused me, since it is clearly not a materialist response. Rereading
Searle's Scientific American article, I see that he points this out, 
though he considers dualism a fatal flaw and I do not.

To propose that consciousness arises from the manipulation of symbols has
other problems besides separating phenomena into mental and physical
classes, a separation which I find quite congenial while most contemporary
scientifically literate people don't. The problem is one of circular
reasoning: how can consciousness arise from the manipulation of symbols
when symbols arise from consciousness? Symbols are not symbols until a
consciousness attaches a meaning to them. This "systems argument" puts
Descartes before the horse. :-) (Thanks, I've always wanted to say that.)

This is where I come to the confusion of intelligence and consciousness.
Clearly, AI has existed ever since a program defeated its programmer
at chess. However, the attachment of meaning to the symbols output by
the program was accomplished by an external conscious entity, i.e., the
human player. Owing to the equivalence (I hope I understand this correctly)
among NP-complete problems, if we had no knowledge of the rules of chess
and attempted to determine what the machine was doing from a microcode-level
description of the algorithm, we could not derive the rules of the
game. Hence the machine can no more be said to "understand" chess than it
understands the travelling salesman problem, etc. It is only by the application of
the (as yet mysterious) consciousness of the human participant that any
meaning is attached to the symbols.
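
The point can be put computationally. Here is a sketch (my own, with
hypothetical helper names) of a search procedure that "plays" whatever
game you hand it; nothing in its text mentions chess, and reading it
would never yield the rules:

    # A generic one-ply search: it is handed opaque successor and
    # scoring functions, and knows nothing about any particular game.
    def best_move(state, successors, evaluate):
        # successors(state) -> list of (move, next_state) pairs
        return max(successors(state), key=lambda m: evaluate(m[1]))

    # Some opaque game: the "meaning" lives in the callables.
    succ = lambda n: [(d, n + d) for d in (1, 2, 3)]
    score = lambda n: -abs(10 - n)
    print(best_move(7, succ, score))  # (3, 10)

    # The same code would serve for chess or the salesman, given
    # suitable (hypothetical) helpers:
    #   best_move(board, legal_chess_moves, material_count)
    #   best_move(tour, two_opt_swaps, negative_tour_length)

The chess is in the human-supplied callables, and the meaning of the
output is in the human who reads it.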

How can consciousness arise from symbols when symbols cannot exist
without consciousness?

>>>>Now, it is proposed that successful implementation of a system whose
>>>>_design objective_ is to _pass our intuitive tests_, (dressed up with 
>>>>Turing's name to give it a certain official credibility) is indeed 
>>>>conscious. That is to say, the assumption is that our intuition is 
>>>>infallible.

>>>By no means.  The assumption behind the test is that *no better test* of
>>>consciousness is presently available.  Turing's original proposal involved
>>>repeated test iterations compared statistically, which assumes fallibility,
>>>not infallibility, of our intuitions.  And the purpose of AI is to develop
>>>artificial intelligence, not to pass the Turing Test.

I have no problem with AI, only with assertions like these: that artificial
consciousness is known to be possible; that consciousness is known to
be an emergent property of symbol manipulation; that consciousness is
known to be impossible without intelligence; that the domain of science
is known to be commensurate with all the phenomena of the universe; etc., etc.

Such assertions are based on faith in science and intuitive beliefs about
the meaning of science, not on the methods of science. That is, they are
not science, but scientism.

The test is assumed to be infallible in that our intuition is assumed to
be incapable of systematic false positives. In other areas, e.g., optical
illusions, it is known that systematic false positives are possible. And
since passing the Turing Test is certainly _a_ goal, if not _the_ goal,
of some AI workers, the possibility of systematic false positives must
be seriously considered. It is my belief that a Turing-Test-passing
algorithm is likely in the near future, but only because we are capable
of being systematically tricked, not because we have a handle on the
nature of consciousness.
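
Turing's statistical framing, mentioned above, at least makes the worry
precise. A sketch (my own construction; a real judge sees a transcript,
which the argument here merely stands in for):

    # Repeated imitation-game trials: does the judge beat chance?
    import random

    def run_trials(judge, n=10000):
        correct = 0
        for _ in range(n):
            truth = random.choice(['human', 'machine'])
            if judge(truth) == truth:
                correct += 1
        return correct / float(n)

    # A systematically fooled judge calls everything human...
    gullible = lambda truth: 'human'
    print(run_trials(gullible))  # ~0.5: no better than chance

A judge who calls everything human lets every machine pass; that is
exactly the systematic false positive I am worried about.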

>>>Drew McDermott posted an interesting article about this last year,
>>>in which he pointed out the Turing Test is only a stopgap in the absence
>>>of a real theory of intelligence, and that to actually construct an AI
>>>will require working out such a theory-- and once we have one we can, with
>>>much relief, throw out the Turing Test and use the theory instead.

>>I cannot see how such a theory can be verified, even if valid. Would
>>you use the Turing Test? How can you verify any objective theory of
>>subjective consciousness? 

>It's hard to say without having the theory in hand, isn't it?  But to verify
>any theory you look for testable consequences, and test them.  These need 
>not be exclusively external; the successful theory should explain not only
>human behavior but human neurology.

I certainly wish you luck on this enormous undertaking, and I hope I am
not so attached to my ideas as to fail to acknowledge your success should
you achieve it. Just the same, I'm not holding my breath.

If you await such a result before assigning rights to AI, though, our
differences are theoretical rather than policy differences. That is quite
different from proposing that passing the Turing Test is sufficient
evidence of consciousness to assign rights to AI realizations or AI
algorithms. (Those who disagree: which would it be, btw?)

>>In the absence of a clear theory of consciousness, I think this attitude is
>>the single most dangerous idea I have ever heard of. 

>If this is the most dangerous idea you can think of, I think you need
>to broaden your reading a bit.  You might start with Stanislaw Lem's
>_Cyberiad_, which might reconcile you to an all-robotic world... 

I'm not above using science fiction as an illustration; I was the one who
brought up Star Trek. But I wouldn't use it as proof of anything. Lem
in particular is not writing as a prognosticator! And there are plenty
of examples in SF of AI run amok.

BTW, the very first (IMHO) SF story ever written was a cautionary tale
about scientific hubris and artificial life. Hint: the author's husband
was a famous Romantic poet.

mt
