Newsgroups: comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!news.mathworks.com!europa.eng.gtefsd.com!howland.reston.ans.net!cs.utexas.edu!news.unt.edu!hermes.oc.com!internet.spss.com!markrose
From: markrose@spss.com (Mark Rosenfelder)
Subject: Re: Bag the Turing test (was: Penrose and Searle)
Message-ID: <D0ELL3.9xt@spss.com>
Sender: news@spss.com
Organization: SPSS Inc
References: <1994Dec5.152724.10065@oracorp.com>
Date: Tue, 6 Dec 1994 19:17:25 GMT
Lines: 102

In article <1994Dec5.152724.10065@oracorp.com>,
Daryl McCullough <daryl@oracorp.com> wrote:
>markrose@spss.com (Mark Rosenfelder) writes:
>>-- Its definition is hopelessly vague; c.a.p posters have used the term
>>for anything from teletype exchanges limited to 5 minutes, to any kind
>>of external behavior, to any kind of physical observable whatsoever,
>>to unobservable phenomena as well (e.g. "thinking").  Such a wide range 
>>of denotations does not amount to a "test" in any useful sense.
>
>It isn't hopelessly vague. It is completely clear that Turing meant
>for the test to be conducted via teletype exchanges. 

True; but not everybody writing in this group follows this restriction.
I didn't make up any of the extensions to the meaning of the TT referred
to above; they all come from exchanges I've had on comp.ai.philo.

>As for the time
>limit, I think you can eliminate the time limit by saying that an AI
>program fails the test if there *exists* a line of questioning (of
>whatever length) that will convince the interrogator that the program
>is not human.

But this is not a test!  How does one discover this line of questioning?
This is like saying that a student fails a physics exam if there exists
a physics problem he cannot solve.  It sounds nice, but what is the
actual test?  The student might be asked any of hundreds (or even an
infinite number) of possible problems.  This criterion is not a test;
it's just a restatement of the condition a test is looking for.

>>-- It hasn't achieved consensus; if a machine "passed the Turing Test"
>>the people who already believe a machine can't think would only say 
>>"who cares?" anyway.
>
>The people who say they wouldn't be convinced by a computer that
>passes the Turing Test are just not being honest with themselves.  It
>is one thing to give intellectual, Searle-style arguments as to why a
>hypothetical TT-passing program doesn't really understand, and it is
>quite another to actually *meet* such a program, and dismiss it. I am
>willing to bet that there is not a single person on this newsgroup who
>would not come to accept a program as intelligent and conscious if the
>program were capable of carrying on a lively, insightful discussion
>about politics, morality, love, family and artificial intelligence.

That may be true; but then Searle doesn't read this newsgroup...
(On the other hand Harnad does, when he wants to joust with pygmies;
and since he accepts Searle's argument, he claims he wouldn't
accept the TT passer as intelligent either.)

It's a fascinating question, how people would really react to intelligent
TT passers.  I am going by what anti-AI writers claim they would do; 
you're sure that faced with the real thing, their skepticism would vanish.
That may be-- it's hard to believe that Searle has really tried to
picture to himself what passing the TT would mean-- but this conclusion
may be defeated by human prejudice.  Humans are ready enough to treat 
other members of their species as less than human; why should we expect
them to treat AIs any better?

>>-- It's biased toward language use, rather than any other
>>demonstration of intelligence: the ability to read a map, or fix a
>>bicycle, or play a violin, for example.
>
>As I said, the TT is not really about intelligence in all generality,
>it is about particularly those aspects of intelligence that are
>revealed through what we call "personality". 

Well, this is pretty much a restatement of my complaint, except that
you're reifying those aspects of intelligence tested by the TT.

>A TT-passing machine may
>very well be a complete klutz at fixing a bike (it may not even have
>any hands or eyes with which to do so). However, it's also the case
>that there are people who are complete klutzes, and nobody denies
>their "personhood" on that basis.

True, but that's just a consequence of thinking of intelligence as one
thing, linked almost exclusively to verbal behavior.  People with 
*just* verbal intelligence, but below-average ability in other areas
(mechanical aptitude, artistic or musical skill, social interaction,
etc.) are still called "intelligent" without qualification.  I don't see this as a
defense of the TT, but as a narrowness in the usual conception of
intelligence.

>Now, it is very likely that human intelligence developed for such
>practical purposes as manipulating tools, and that language use was
>just a lucky side-effect. So, these practical aspects of intelligence
>could very well be the kind of groundwork that true AI will depend on.
>I suspect that it probably is.

So do I.

>>-- Better definitions of intelligence exist; for instance, it can be 
>>analyzed as a combination of capacities to remember, to learn, to
>>reason, to use language, to create, to plan, to know a good deal about
>>the world, to execute everyday tasks.  
>
>It seems to me that the Turing Test incorporates all of those (except
>insofar as "executing everyday tasks" requires hands or something
>similar).

This particular objection was to the use of the TT as a definition of
intelligence, not to whether it can be used to check for various aspects
of intelligence.
