Newsgroups: comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!news.mathworks.com!news.kei.com!hermes.oc.com!internet.spss.com!markrose
From: markrose@spss.com (Mark Rosenfelder)
Subject: Re: That wacky Turing test (was Penrose and Searle)
Message-ID: <D05q1q.It8@spss.com>
Sender: news@spss.com
Organization: SPSS Inc
References: <CzsHMy.B9n@gpu.utcc.utoronto.ca> <CzzuEu.F48@gpu.utcc.utoronto.ca> <D01LqA.I9q@cogsci.ed.ac.uk> <jqbD02vM6.B1@netcom.com>
Date: Fri, 2 Dec 1994 00:15:25 GMT
Lines: 77

In article <jqbD02vM6.B1@netcom.com>, Jim Balter <jqb@netcom.com> wrote:
(responding to Jeff Dalton)
>Again you don't answer the question; why make the test harder than the one you
>use for humans?  A program with the physical limitations of both Hawking and
>Keller and the intellect of Quayle would still be impressive, would it not?
>But you seem to want to look at its listing.  Why?  What will that tell you?
>What is there to be found there that indicates intelligence or consciousness?
>The variable names?  What if the program wins the Obfuscatory C contest?  Does
>that affect whether it is conscious?  We can look at programs to see whether
>they do the right thing when we know what algorithm is necessary, but what
>algorithm is necessary for consciousness?  That only works for very localized,
>pure problems.  After doing 26 years of systems programming, I have learned
>(over and over) that the final proof is in the pudding.  You also want to look
>for "internal dialog".  What is that?  Logging intermediate results to a file?
>Subvocalization?  (I'm sure the engineers can add it if you need it.)  From
>the speculations of Dennett and Hawkins, one might conclude that "internal
>dialog" is an unnecessary artifact of evolution, one of those things that a
>non-blind watchmaker never would have included.  Why do you want to require
>it?  What good is it?  What has it got to do with consciousness?  It may not
>even be nearly as universal in humans as you imagine.  I find that the more
>familiar I am with a subject, the more confident I am about it, the less such
>dialog occurs.  I usually think "what am I going to say next?" when I'm
>prevaricating.  Is that what you want from AI?

All these are good questions, but the skepticism shouldn't be directed
only at those who want to look at the algorithm.

Why should the TT be any sort of test of "consciousness"?  It wasn't
designed as one; it was proposed as a test of intelligence (more or less--
see caveats in another thread).  How did it get expanded to consciousness?

Surely the problem pointed out by this question, and by your questions above,
is the vagueness of the notion of consciousness.  To use your programming
analogy: is consciousness something like correct sorting, which can be
black-box tested, or is it something like using a quicksort rather than a
heapsort, which can only be detected by looking inside the box?  If anyone
can give us an adequate definition of consciousness, we can decide where
to look for it.  Till then we can't say that looking inside the algorithm
will discover it... or that running a Turing test will.
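
To put the analogy in code (a toy sketch; every name in it is mine, purely
for illustration): correct sorting is a property of the outputs, but the
choice of algorithm isn't.

    #include <stdio.h>
    #include <stddef.h>

    /* "The box": pretend we can call this but not read it.  It could
       be quicksort or heapsort inside; the output is the same.  (It's
       actually an insertion sort, to keep the demo short.) */
    static void sort(int *a, size_t n)
    {
        size_t i, j;
        for (i = 1; i < n; i++)
            for (j = i; j > 0 && a[j-1] > a[j]; j--) {
                int t = a[j]; a[j] = a[j-1]; a[j-1] = t;
            }
    }

    /* Black-box test: feed in data, look only at what comes out.
       (A fuller test would also check that the output is a
       permutation of the input.) */
    static int sorts_correctly(int *a, size_t n)
    {
        size_t i;
        sort(a, n);
        for (i = 1; i < n; i++)
            if (a[i-1] > a[i])
                return 0;
        return 1;
    }

    int main(void)
    {
        int a[] = { 3, 1, 4, 1, 5, 9, 2, 6 };
        puts(sorts_correctly(a, 8) ? "passes" : "fails");
        return 0;
    }

No test of this kind can distinguish quicksort from heapsort, since they
produce identical output on every input; that difference is visible only
in the listing.  The open question is which kind of property consciousness is.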

Now, I myself am inclined to say that a TT *can* check for consciousness...
by asking about it.  We can ask the machine whether it's conscious, what
its feelings are, what kind of qualia it has, what its thoughts are.  Could
it fake the answers somehow?  Maybe; but I suspect that it won't be possible
to build a program capable of dealing with such questions that does not
in fact have something tolerably close to the things we're looking for.

This doesn't mean, however, that the TT is the *best* way to look for these
things, or that you couldn't find them looking at the algorithm too.
A really good theory of mind-- probably another necessity for
actually building an AI-- should tell us exactly what consciousness is
and how to test for it.

I don't have such a theory in hand, but I find it plausible that 
consciousness offers some evolutionary advantages.  I think of it as a 
monitoring process, one which can apply memory and judgment to physical and
social situations the organism finds itself in.  Such a process needs
access to sensory input and memory; but evolution, no fool, has not 
entrusted everything to it (we are neither conscious of nor able to control
everything that goes on in our brains or bodies), and has safeguarded it with
some non-maskable interrupts (pain, hunger, etc.) to keep it on track.  
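
In programmer's terms (a loose sketch; every name below is invented, and
it's metaphor, not neurology):

    #include <stdio.h>

    struct organism {
        int pain, hunger;        /* the "non-maskable interrupts" */
    };

    /* One tick of the monitor process.  The interrupts pre-empt
       whatever it was attending to; it never gets to mask them. */
    static void monitor_step(struct organism *o)
    {
        if (o->pain)        { printf("attend to injury\n"); o->pain = 0; }
        else if (o->hunger) { printf("seek food\n");        o->hunger = 0; }
        else printf("apply memory and judgment to the situation\n");
        /* Meanwhile heartbeat, digestion, etc. run below this level
           and never consult the monitor at all. */
    }

    int main(void)
    {
        struct organism o = { 0, 0 };
        monitor_step(&o);                 /* routine monitoring */
        o.hunger = 1; monitor_step(&o);   /* interrupt takes priority */
        o.pain = 1;   monitor_step(&o);
        return 0;
    }

The point of the caricature is only the division of labor: most of the
system runs without the monitor, but a few signals always reach it.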

The advantage of having such a process (over mere sphexish subroutines)
should be clear.  I don't buy the argument that it could be an "unnecessary
artefact"; evolutionary accidents have histories too, and I haven't seen a
plausible candidate for a process of which consciousness is an accidental
by-product.

Now, this monitor process is rather arrogant, and likes to identify itself
with the whole organism.  For all we know other brain processes are
"conscious" too; but since we (the monitor processes) are the ones in control 
of language, the others don't get to protest.  (Or we just interpret their 
voices as "our thoughts".)

This way of getting on in the world may not be *necessary*; nothing evolution
does is necessary, but it gets the job done.  Other organisms may get
things done some other way-- insects, for instance.  
