Newsgroups: comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!pipex!uunet!psinntp!scylla!daryl
From: daryl@oracorp.com (Daryl McCullough)
Subject: Re: Bag the Turing test (was: Penrose and Searle)
Message-ID: <1994Dec5.152724.10065@oracorp.com>
Organization: Odyssey Research Associates, Inc.
Date: Mon, 5 Dec 1994 15:27:24 GMT
Lines: 135

markrose@spss.com (Mark Rosenfelder) writes:

>Top eleven reasons the Turing Test should be thrown out

>-- Its definition is hopelessly vague; c.a.p posters have used the term
>for anything from teletype exchanges limited to 5 minutes, to any kind
>of external behavior, to any kind of physical observable whatsoever,
>to unobservable phenomena as well (e.g. "thinking").  Such a wide range 
>of denotations does not amount to a "test" in any useful sense.

It isn't hopelessly vague. It is completely clear that Turing meant
for the test to be conducted via teletype exchanges. As for the time
limit, I think you can eliminate it by saying that an AI program
fails the test if there *exists* a line of questioning (of whatever
length) that will convince the interrogator that the program is not
human.
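
To make the quantifier structure concrete, here is a minimal sketch
in Python. Every name in it (passes_strengthened_tt, judges_nonhuman,
interrogations) is hypothetical, made up for illustration; nothing
here is part of Turing's own formulation.

    def passes_strengthened_tt(program, interrogations, judges_nonhuman):
        # The program passes only if *no* line of questioning, of
        # whatever length, leads the interrogator to judge it
        # nonhuman. The idealized space of interrogations is
        # unbounded, so a real run can exhibit a failure (a witness
        # interrogation) but can never conclusively certify a pass.
        return not any(judges_nonhuman(program, q)
                       for q in interrogations)

Note the asymmetry this builds in: one damning interrogation settles
the matter against the program, while a pass is always provisional.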

>-- It does nothing to guide AI research.  AI does not proceed by 
>throwing up candidates for the Turing Test, but by attempting to solve
>particular practical problems, or by emulating biological intelligence.
>The TT thus does no good within the field of AI.

The Turing Test is not about AI in general; it is about a very
restricted area of AI: the creation of artificial minds, or perhaps
artificial personalities. Very little (if any) current AI research is
directed at this goal, perhaps because it is too hard, or because
nobody is interested, or because it is believed that certain
groundwork (pattern recognition, problem solving, etc.) must be laid
first. One would only expect the Turing Test to come into play when
researchers are ready and willing to work on artificial
personalities.

>-- It hasn't achieved consensus; if a machine "passed the Turing Test"
>the people who already believe a machine can't think would only say 
>"who cares?" anyway.

The people who say they wouldn't be convinced by a computer that
passes the Turing Test are just not being honest with themselves.  It
is one thing to give intellectual, Searle-style arguments as to why a
hypothetical TT-passing program doesn't really understand, and it is
quite another to actually *meet* such a program, and dismiss it. I am
willing to bet that there is not a single person on this newsgroup who
would not come to accept a program as intelligent and conscious if the
program were capable of carrying on a lively, insightful discussion
about politics, morality, love, family and artificial intelligence.

>-- It's fatally subjective. There is no demonstration that results
>are reproducible, even with a single observer.

If you use the rule that a program fails the test if there *exists* a
line of questioning that can convince the interrogator that the
program is nonintelligent, then there is no possibility of the
results from a single observer being inconsistent. As for multiple
observers, it is true that there would be a subjective element: there
would always be "borderline" personalities that some people consider
intelligent and others do not. However, there would be a vast area of
agreement.
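
Under that rule a single observer's verdict is also monotone:
sessions only accumulate evidence, and one failing session settles
the question for good. A toy sketch in Python, again with made-up
names:

    class SingleObserverVerdict:
        # Toy model of one interrogator's bookkeeping: the verdict
        # only ever moves from "not yet failed" to "failed", so the
        # observer's results cannot flip back and forth between runs.
        def __init__(self):
            self.failed = False

        def record_session(self, convinced_nonhuman):
            # A later session can only confirm failure; nothing can
            # undo a failure once one is on record.
            self.failed = self.failed or convinced_nonhuman

        def verdict(self):
            return "fails" if self.failed else "not yet failed"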

>-- It's easy to fool.

Not if you take my definition. The program doesn't pass simply because
the interrogator believes that it is intelligent, it passes because
there is *no* line of questioning that will convince the interrogator
that it is unintelligent.

>-- Focussing on external behavior as it does, the TT encourages the
>notion that only algorithmic structure, rather than any physical fact
>about human brains, produces intelligence. That may be, but it should
>be a matter for investigation, not an initial assumption.

I don't think the TT makes any particular assumption about *how*
intelligence is produced. Regardless of how intelligence is produced,
its net effect in a conversational setting is an input-output
relation, and that relation is all the test examines.

>-- It's biased toward language use, rather than any other
>demonstration of intelligence: the ability to read a map, or fix a
>bicycle, or play a violin, for example.

As I said, the TT is not really about intelligence in all generality;
it is specifically about those aspects of intelligence that are
revealed through what we call "personality". A TT-passing machine may
very well be a complete klutz at fixing a bike (it may not even have
any hands or eyes with which to do so). However, it's also the case
that there are people who are complete klutzes, and nobody denies
their "personhood" on that basis.

Now, it is very likely that human intelligence developed for such
practical purposes as manipulating tools, and that language use was
just a lucky side-effect. So these practical aspects of intelligence
could very well be the kind of groundwork that true AI will depend
on; I suspect they are.

>-- If it worked at all, it would be because humans have an ability
>to determine what is or is not "intelligence"; an ability which we
>should examine to see where it comes from and how it works, not 
>mindlessly take as an unanalyzable given.

I agree. But it seems to me that looking at which programs do and do
not pass the test would be a very good way to find out more about
human intuitions about intelligence.

>-- Better definitions of intelligence exist; for instance, it can be 
>analyzed as a combination of capacities to remember, to learn, to
>reason, to use language, to create, to plan, to know a good deal about
>the world, to execute everyday tasks.  

It seems to me that the Turing Test incorporates all of those (except
insofar as "executing everyday tasks" requires hands or something
similar).

>-- One of the central tasks of AI (and cognitive science in general) is to 
>give us a complete theory of mind, which would include an explanation of 
>intelligence and how to search for it. Far from being necessary for AI,
>the TT would be superseded by any successful AI (which would have to be
>built on a theory of mind incorporating a far better explanation of
>what intelligence is).

Well, I'm not holding my breath. I think we will have intelligent
robots a *long* time before we have a good theory of mind (or at
least before there is any consensus about what counts as a good
theory).

>-- Not the least use of any theory is the counter-arguments that are 
>raised against it, which are often useful for refining our understanding.

I agree with you there.

Daryl McCullough
ORA Corp.
Ithaca, NY