Newsgroups: comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!gatech!newsxfer.itd.umich.edu!ncar!asuvax!chnews!ornews.intel.com!news.jf.intel.com!psinntp!psinntp!scylla!daryl
From: daryl@oracorp.com (Daryl McCullough)
Subject: Re: Bag the Turing test (was: Penrose and Searle)
Message-ID: <1994Dec8.000925.27355@oracorp.com>
Organization: Odyssey Research Associates, Inc.
Date: Thu, 8 Dec 1994 00:09:25 GMT
Lines: 113

markrose@spss.com (Mark Rosenfelder) writes:

>>As for the time
>>limit, I think you can eliminate the time limit by saying that an AI
>>program fails the test if there *exists* a line of questioning (of
>>whatever length) that will convince the interrogator that the program
>>is not human.
>
>But this is not a test!

Okay, so call it the Turing Criterion, if you like (or Daryl's Criterion).

>How does one discover this line of questioning?

By trial and error, or else by cheating (looking at the code).
Actually, looking at the code makes the search for such a line of
questioning considerably easier.

>This is like saying that a student fails a physics exam if there exists
>a physics problem he cannot solve.  It sounds nice, but what is the 
>actual test?  The student might solve any of hundreds (or even an infinite
>number) of actual problems.  This criterion is not a test, it's just a 
>restatement of the condition a test is looking for.

The comparison with the physics exam is quite apt. There is no such
thing as a test that proves that someone knows physics, but it is
possible to prove that someone doesn't know physics. In the same way,
there is no way to prove that a being is intelligent, but it is
possible to prove that it isn't (at least in some areas). I don't see
this as a flaw in the Turing Test, since almost every test has the
same problem---a program cannot be tested for correctness, a bridge
cannot be tested for sturdiness, a car cannot be tested for
reliability. Certain hypotheses are intrinsically not verifiable;
they are only falsifiable.
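The point about program testing can be sketched concretely. Here is a
minimal Python illustration (the function and test harness are
hypothetical, invented for this example): a finite test run can
falsify a correctness claim by finding a counterexample, but a passing
run proves nothing about the inputs it never tried.

```python
import random

def buggy_abs(x):
    # Hypothetical implementation with a hidden defect:
    # it mishandles exactly one input.
    if x == -12345:
        return x          # the bug: returns a negative number
    return x if x >= 0 else -x

def run_tests(f, trials=1000, seed=0):
    # A finite test run can only falsify, never verify.
    rng = random.Random(seed)
    for _ in range(trials):
        x = rng.randint(-1000, 1000)
        if f(x) != abs(x):
            return ("falsified", x)   # found a disproof
    return ("not falsified", None)    # says nothing about all inputs

print(run_tests(buggy_abs))  # -> ('not falsified', None): bug lies outside the sampled range
print(buggy_abs(-12345) == abs(-12345))  # -> False: the counterexample exists anyway
```

A thousand trials pass, yet the program is wrong; the test could only
ever have demonstrated failure, just as an interrogation can only ever
demonstrate that its subject is not intelligent.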

>>I am
>>willing to bet that there is not a single person on this newsgroup who
>>would not come to accept a program as intelligent and conscious if the
>>program were capable of carrying on a lively, insightful discussion
>>about politics, morality, love, family and artificial intelligence.
>
>That may be true; but then Searle doesn't read this newsgroup...
>(On the other hand Harnad does, when he wants to joust with pygmies,
>and since he accepts Searle's argument his claim is that he wouldn't
>accept the TT passer as intelligent either.)

Searle and Harnad may say that, but I just don't believe them. I think
they would completely drop their whole line of argument the minute they
actually *met* a TT-passing program.

>It's a fascinating question, how people would really react to intelligent
>TT passers.  I am going by what anti-AI writers claim they would do; 
>you're sure that faced with the real thing, their skepticism would vanish.
>That may be-- it's hard to believe that Searle has really tried to 
>picture to himself what passing the TT really means-- but this conclusion
>may be defeated by human prejudice.  Humans are ready enough to treat 
>other members of their species as less than human; why should we expect
>them to treat AIs any better?

People treat their fellow humans as non-persons in the abstract, but I
believe that experience cracks through prejudice. There have been
many, many examples of Nazi party members who fell in love with Jews,
or white racists who have black friends. Prejudice is *pre*
judging---that is, judging prior to actual experience.


>>>-- It's biased toward language use, rather than any other
>>>demonstration of intelligence: the ability to read a map, or fix a
>>>bicycle, or play a violin, for example.
>>
>>As I said, the TT is not really about intelligence in all generality,
>>it is about particularly those aspects of intelligence that are
>>revealed through what we call "personality". 

>Well, this is pretty much a restatement of my complaint, except that
>you're reifying those aspects of intelligence tested by the TT.

Those are the aspects of intelligence that are philosophically
interesting. It may be an extremely difficult task to teach a robot to
play violin or ride a bicycle, but those accomplishments are (in my
opinion) without philosophical interest. There is nobody (that I know
of) who is philosophically opposed to AI because they think that
machines can never ride bicycles (and the same for any other practical
demonstration of skill). People such as Penrose or Searle say that
*despite* any such accomplishments, they will not believe in machine
intelligence.

>>A TT-passing machine may
>>very well be a complete klutz at fixing a bike (it may not even have
>>any hands or eyes with which to do so). However, it's also the case
>>that there are people who are complete klutzes, and nobody denies
>>their "personhood" on that basis.
>
>True, but that's just a consequence of thinking of intelligence as one
>thing, linked almost exclusively to verbal behavior.  People with 
>*just* verbal intelligence, but below-average intelligence in other 
>areas (mechanical ability, artistic or musical skill, social interaction,
                                                       ^^^^^^^^^^^^^^^^^^
Actually, I think social interaction comes across pretty well in the
Turing Test. You can't test such things as table manners, but nowadays
most of our social interaction is verbal.

>etc.) are still called "intelligent" simply.  I don't see this as a 
>defense of the TT, but as a narrowness in the usual conception of
>intelligence.

As I said before, I think that it is only in this narrow conception
of intelligence that there is any philosophical controversy. The Turing
Test addresses exactly the contentious aspects of intelligence.

Daryl McCullough
ORA Corp.
Ithaca, NY

