Newsgroups: comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!pipex!uknet!festival!edcogsci!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Subject: Re: That wacky Turing test (was Penrose and Searle)
Message-ID: <D0Czwt.32D@cogsci.ed.ac.uk>
Sender: usenet@cogsci.ed.ac.uk (C News Software)
Nntp-Posting-Host: bute-alter.aiai.ed.ac.uk
Organization: AIAI, University of Edinburgh, Scotland
References: <D01LqA.I9q@cogsci.ed.ac.uk> <jqbD02vM6.B1@netcom.com> <D05q1q.It8@spss.com>
Date: Mon, 5 Dec 1994 22:31:40 GMT
Lines: 58

In article <D05q1q.It8@spss.com> markrose@spss.com (Mark Rosenfelder) writes:
>In article <jqbD02vM6.B1@netcom.com>, Jim Balter <jqb@netcom.com> wrote:
>(responding to Jeff Dalton)
>>Again you don't answer the question; why make the test harder than the one you
>>use for humans?  A program with the physical limitations of both Hawking and
>>Keller and the intellect of Quayle would still be impressive, would it not?
>>But you seem to want to look at its listing.  Why?  What will that tell you?
>>What is there to be found there that indicates intelligence or consciousness?
>>The variable names?  What if the program wins the Obfuscatory C contest?  Does
>>that affect whether it is conscious?  We can look at programs to see whether
>>they do the right thing when we know what algorithm is necessary, but what
>>algorithm is necessary for consciousness? [etc]

>All these are good questions, but the skepticism shouldn't be directed
>only at those who want to look at the algorithm.

I want to look at everything that's relevant.  There are many things
it's too soon to exclude, and the algorithm seems to be one of them.
Perhaps (for instance) "Strong AI" is correct in that running the
right program is enough.  But that doesn't mean any program that can
generate tty-TT-passing behavior is enough.  I'm surprised this
suggestion is so controversial.

>Why should the TT be any sort of test of "consciousness"?  It wasn't
>designed as one; it was proposed as a test of intelligence (more or less--
>see caveats in another thread).  How did it get expanded to consciousness?

Beats me, but people defend it as such.

>Surely the problem pointed out by this question, and by your questions above,
>is the vagueness of the notion of consciousness.  To use your programming
>analogy: is consciousness something like correct sorting, which can be
>black-box tested, or is it something like using a quicksort rather than a
>heapsort, which can only be tested by looking inside the box?  If anyone
>can give us an adequate definition of consciousness, we can decide where
>to look for it.  Till then we can't say that looking inside the algorithm
>will discover it... or that running a Turing test will.

I agree.
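
To make the black-box half of that analogy concrete, here's a minimal
sketch in C (my own illustration, not anything from the thread): a
behavioural test that checks only whether the routine's output comes
out sorted.  By construction it passes for a quicksort and a heapsort
alike, so it can never tell you which algorithm is inside the box.

/* Black-box test for "sorts correctly": checks input/output behaviour
 * only.  Any routine matching the sorter signature can be plugged in;
 * the test cannot distinguish quicksort from heapsort.  (A fuller test
 * would also verify the output is a permutation of the input.)
 */
#include <stdio.h>
#include <stdlib.h>

typedef void (*sorter)(int *a, size_t n);

static int cmp_int(const void *p, const void *q)
{
    int x = *(const int *)p, y = *(const int *)q;
    return (x > y) - (x < y);
}

/* one candidate box: the C library's qsort, wrapped to fit */
static void lib_sort(int *a, size_t n)
{
    qsort(a, n, sizeof *a, cmp_int);
}

/* returns 1 iff sort() leaves a random array in nondecreasing order */
static int black_box_test(sorter sort, size_t n)
{
    size_t i;
    int ok = 1;
    int *a = malloc(n * sizeof *a);
    if (a == NULL)
        return 0;
    for (i = 0; i < n; i++)
        a[i] = rand();
    sort(a, n);
    for (i = 1; i < n; i++)
        if (a[i - 1] > a[i])
            ok = 0;
    free(a);
    return ok;
}

int main(void)
{
    printf("behaves like a sort: %s\n",
           black_box_test(lib_sort, 1000) ? "yes" : "no");
    return 0;
}

If consciousness is like correct sorting, a behavioural test of this
kind is all you need; if it's like the choice of quicksort over
heapsort, no amount of such testing will find it.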

>Now, I myself am inclined to say that a TT *can* check for consciousness...
>by asking about it.  We can ask the machine if it's conscious, what
>its feelings are, what kind of qualia it has, what its thoughts are.  Could
>it fake the answers somehow?  Maybe; but I suspect that it won't be possible
>to build a program capable of dealing with such questions that does not
>in fact have something tolerably close to the things we're looking for.

I agree with that too.

>This doesn't mean, however, that the TT is the *best* way to look for these
>things, or that you couldn't find them looking at the algorithm too.
>A really good theory of mind-- probably another necessity for
>actually building an AI-- should tell us exactly what consciousness is
>and how to test for it.

There too.

-- jeff
