Newsgroups: comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!cs.utexas.edu!news.unt.edu!hermes.oc.com!internet.spss.com!markrose
From: markrose@spss.com (Mark Rosenfelder)
Subject: Re: Bag the Turing test (was: Penrose and Searle)
Message-ID: <D021p7.681@spss.com>
Sender: news@spss.com
Organization: SPSS Inc
References: <1994Nov24.135351.25743@unix.brighton.ac.uk> <D00167.91w@spss.com> <3befha$8u5@mp.cs.niu.edu>
Date: Wed, 30 Nov 1994 00:36:41 GMT
Lines: 155

In article <3befha$8u5@mp.cs.niu.edu>, Neil Rickert <rickert@cs.niu.edu> wrote:
>In <D00167.91w@spss.com> markrose@spss.com (Mark Rosenfelder) writes:
>>-- Its definition is hopelessly vague; c.a.p posters have used the term
>>for anything from teletype exchanges limited to 5 minutes, to any kind
>>of external behavior, to any kind of physical observable whatsoever,
>>to unobservable phenomena as well (e.g. "thinking").  Such a wide range 
>>of denotations does not amount to a "test" in any useful sense.
>
>Turing's definition was not hopelessly vague.  A limited TT test is,
>by Turing's definition, an oxymoron.  At best this is a reason that
>the TT should be reserved for use as a test, and should not be a
>topic of philosophy.

Turing's article begins by promising to consider the question "Can machines
think?", and spends most of its length answering possible objections why
they could not.  So I think he was in part committing some philosophy,
and his example has been followed.

If people stuck to Turing's definition we'd be better off; but they don't,
and even defend their extrapolations.  

>>-- It does nothing to guide AI research.  AI does not proceed by 
>>throwing up candidates for the Turing Test, but by attempting to solve
>>particular practical problems, or by emulating biological intelligence.
>>The TT thus does no good within the field of AI.
>
>Surely the particular practical problems are amongst those that could
>be part of a Turing test.

Not all of them; artificial vision and robotics are important parts of AI
not covered by the TT; and for that matter the TT, though it could be used
to test a chess-playing computer, is not an efficient way of doing so.

>>-- It hasn't achieved consensus; if a machine "passed the Turing Test"
>>the people who already believe a machine can't think would only say 
>>"who cares?" anyway.
>
>I think this objection misses the point.  The TT was not intended to
>prove that a machine can think.  It was intended as a significant
>achievement which can be tested.

Turing explicitly states that his "imitation game" is a replacement for the
original question (can machines think?), which he considers "meaningless".
So I agree that he didn't intend the TT to be a proof that machines can
think (he would have regarded such a claim as fatally undefined),
but as a reframing of the question in more tractable terms.  This may be
pretty much what you meant; but your formulation calls the achievement
merely "significant", while Turing seems to go further:

   "I believe that in about fifty years' time it will be possible to program
    computers... to make them play the imitation game so well that an average
    interrogator will not have more than 70 percent chance of making the 
    right identification after five minutes of questioning....  I believe 
    that at the end of the century the use of words and general educated 
    opinion will have altered so much that one will be able to speak 
    of machines thinking without expecting to be contradicted."

So I think to Turing, passing the test would not simply be an interesting
milestone, but a reason to cease distinguishing between human and computer
"thought".

>>-- It's fatally subjective.  There is no demonstration that results
>>are reproducible, even with a single observer.
>
>I think this complaint is premature.  Since nothing has passed the TT
>there is no data on which to base a judgement of reproducibility.

We already have false positives: people who have thought that Eliza, or
the dubious entrants in the Loebner competition, were human beings.

You might reply that these were tests on limited subjects-- but that is as
much as to say that the same questioners could run the test again with
vastly different results, which was precisely my point above.

>>-- It's easy to fool.  Turing seemed to think that people will not on
>>the whole accept "intelligence" in machines.  On the contrary, many
>>people accept it all too readily, or even figure it's already been done.
>
>Again, this is premature.  There have not yet been any decent tests.

But in what way were the tests so far indecent?  That they were limited
in subject, or that the testers were not sufficiently probing or skeptical?
But what in the Turing Test specifically addresses such concerns, or
demands a given broadness of coverage or skill in questioning?

As Feynman said (more or less): "The important thing in science is not to
fool yourself.  And you are the easiest person to fool."  I am sure Turing
didn't *want* to be fooled; but one generally has to take extra precautions
to avoid being fooled, and Turing takes none.

>>-- Focussing on external behavior as it does, the TT encourages the notion
>>that only algorithmic structure, rather than any physical fact about 
>>human brains, produces intelligence.  That may be, but it should be a
>>matter for investigation, not an initial assumption.
>
>As you admitted, the TT was not proposed as a definition of intelligence.
>The TT measures what the TT measures.  Any science works on the
>principle of measuring only that which can be measured.  The test
>itself does not encourage false assumptions -- it is the bad
>philosophizing about the test which does that.

But the test itself proceeds from philosophical ideas.  Turing says he
wants to draw a "fairly sharp line between the physical and the intellectual
capacities of a man."  That's a philosophical statement about what is
important about human cognition, and the test as proposed encourages a
line of inquiry based on that valuation.  

Now, AI has never been restricted by Turing, and has learned the importance
of nonverbal tasks, and become more interested in the biological workings
of brains.  Surely no AI researcher today would restrict "intelligence" 
solely to the kinds of "intellectual capacities" Turing refers to in his
article: playing chess, addition, writing sonnets.  The field is still
interested in these things, but not exclusively.  It should not make so much,
then, of an intellectual tool that shows the limitations of its historical
context.

>>-- Better definitions of intelligence exist; for instance, it can be 
>>analyzed as a combination of capacities to remember, to learn, to
>>reason, to use language, to create, to plan, to know a good deal about
>>the world, to execute everyday tasks.  
>
>It is not at all clear that these "better definitions" are less
>susceptible to the problems you have listed.

Quite correct; but they at least recognize that intelligence may be 
a complex combination of qualities, rather than a single, mysterious,
unanalyzable property.  (I don't exaggerate; I've seen defenses of 
the TT on this forum which heatedly denied that intelligence could be divided 
up in any way, or that there could be any conceivable alternative to the TT.)

>>-- One of the central tasks of AI (and cognitive science in general) is to 
>>give us a complete theory of mind, which would include an explanation of 
>>intelligence and how to search for it.  Far from being necessary for AI,
>>the TT would be superseded by any successful AI (which would have to be
>>built on a theory of mind incorporating a far better explanation of
>>what intelligence is).
>
>I don't see how this can be an objection.  That would argue that
>Ptolemaic astronomy should not have been pursued, for it was to be
>replaced by Copernican astronomy.  But Ptolemaic astronomy collected
>the data which inspired Copernicus to propose his alternative
>theory.  If work with the TT were to result in better approaches
>which superseded the TT, I would count them as a measure of the
>success of the TT as a useful interim step.

Right; but in my view this has already happened.  AI has already moved
past its exclusive fascination with purely rational abilities.

>I think your posting really amounts to an objection to the use of the
>TT in the philosophy of AI.  In that, I agree with you.  It is my
>reading of Turing that he never proposed it for that role.  The test
>remains a natural and obvious pragmatic test of a significant
>achievement in AI, and retains its usefulness in that role.

I don't have any problem with that.  My objections are addressed rather
to those who want to take it as much more than that.
