Newsgroups: comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!news.mathworks.com!news.alpha.net!uwm.edu!news.moneng.mei.com!howland.reston.ans.net!news.sprintlink.net!pipex!uknet!festival!edcogsci!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Subject: Re: Bag the Turing test (was: Penrose and Searle)
Message-ID: <D0K5EA.CEv@cogsci.ed.ac.uk>
Sender: usenet@cogsci.ed.ac.uk (C News Software)
Nntp-Posting-Host: bute.aiai.ed.ac.uk
Organization: AIAI, University of Edinburgh, Scotland
References: <3bu0gs$fff@sun4.bham.ac.uk> <jqbD0DG73.4uu@netcom.com> <D0GFxv.5zL@gpu.utcc.utoronto.ca>
Date: Fri, 9 Dec 1994 19:13:21 GMT
Lines: 134

In article <D0GFxv.5zL@gpu.utcc.utoronto.ca> pindor@gpu.utcc.utoronto.ca (Andrzej Pindor) writes:

>Another problem, most often ignored by people pulling out of their sleeves
>the HLT example (see "merely" above) is a complexity of a search algorithm
>in this case. considering size and dimensionality of the database. Note 
>that it has also to include past history of the conversation and a decision 
>process which branch to take (this decision process would be a reflection of 
>a "personality"). Personally I do not see any guarantee that a program 
>utilizing HLT would be any simpler than a program generating the conversation.
>Regardless, I do agree that stress on "how" is a mistake. Hans Moravec
>argued this very convincingly in terms of optimization.

So far as I can tell, the HLT program could be very simple.

There's a very large tree.  The program has a pointer to a node
in the tree.  That represents where it is in the conversation.
At a given node, there's a branch for each input that might arrive.
The program finds the right branch and follows it.  In the case
of conversation-by-teletype, there could be a branch for each
character in the character set.  Finding the right branch is
then trivial.  Some nodes, in addition to their branches, say
"output this: ...".

The complexity is all in the data.  And there it appears as size.
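To make the point concrete, here is a minimal sketch of that tree
walk (in Python, which did not exist in this form in 1994 -- purely
illustrative; the node and branch names are my own invention, and a
real table would of course be astronomically large):

```python
# A toy humongous-lookup-table (HLT) conversation tree.
# Each node has one branch per possible input character, and a node
# may also carry an output string to emit on arrival.

class Node:
    def __init__(self, output=None):
        self.output = output   # text the program emits at this node
        self.branches = {}     # one branch per possible input character

def step(node, ch):
    """Follow the branch for input character ch -- a trivial lookup."""
    return node.branches[ch]

# Build a tiny table: after receiving "hi" the program says "Hello."
root = Node()
h = root.branches["h"] = Node()
h.branches["i"] = Node(output="Hello.")

node = root
for ch in "hi":
    node = step(node, ch)
print(node.output)   # -> Hello.
```

The program itself is a few lines; everything that looks like
intelligence would live in the data.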

In any case, the emphasis on how is no mistake, although Aaron
Sloman and I may be the only people here who believe this.
Hans Moravec's arguments are entertaining speculation (e.g.
fictional characters as platonic entities that feel pain,
interpretations in which rocks interpret themselves
as poets) but do not amount to what I would call a convincing
case.  (The same is true of his optimization argument, though
it's less entertaining.)

Now, the case for "how" may not be convincing either.  But in
that case we say "we don't know", not "`how' is a mistake".

The TT defense is shaping up nicely.  Asking how programs work is
a mistake (only their I/O matters?), and let's see...

  The whole point of the TT is to determine if the computer is human.

  The people who say they wouldn't be convinced by a computer that
  passes the Turing Test are just not being honest with themselves. 

  Humans are ready enough to treat other members of their species as
  less than human; why should we expect them to treat AIs any better?

  What might be a test for the achievement of the above-described
  goal-definition of AI? One just pokes at machines; a mind can
  be communicated with. Hence some sort of conversational test
  seems appropriate.

  Everything I know about you and almost everyone else on the net
  has been obtained solely by examining your texts.

  why make the test harder than the one you use for humans? 

  Searle doesn't understand what the Turing test is,
  or what it's supposed to prove.  It's when Searle uses the phrase "the
  Turing test for Chinese" that he betrays his limited understanding.

  Pardon me for being so thick, but if they have identical behavior,
  then how do you know which one is conscious? 

  It does not need to be useful in other ways.  Just having a test is
  purpose enough for having a TT.

  I think your posting really amounts to an objection to the use of
  the TT in the philosophy of AI.  In that, I agree with you.  It is
  my reading of Turing that he never proposed it for that role.  The
  test remains a natural and obvious pragmatic test of a significant
  achievement in AI, and retains its usefulness in that role.

  Maybe those who are anti-TT [in a manner of speaking] are trying to
  question the relativity of TT. I don't see much of a problem here
  that statistical testing could not solve.

  Since we do not know how to judge presence of consciousness except
  by giving a TT, the above is meaningless.

  Passing a verbal TT is what is hard. And inability to pass it does
  not exclude consciousness, see Helen Keller. Passing it OTOH is
  basically how we decide about other people's consciousness.

  ... empty claims that TT is not enough without saying what more is
  required are suspect.

There's enough here to show that the TT defense takes several forms.
Sometimes, the TT is just said to be useful.  Sometimes, defenders
are just countering what they see as bad arguments against the TT.
(I count those who do that as defenders, but not if they also give
arguments against the TT or say such things as "that's a bad argument,
but it would be better if you..." or "although I agree with your
conclusion, your argument is totally bogus".)

But a frequent theme is this: we in effect use the TT to decide about
other humans, and it's just prejudice (or, as Harnad says, arbitrary)
to do anything different for machines.  Suggestions that anything
else (e.g. how it works -- Andrzej, please note that I don't say
"how it looks") might matter are attacked.

This line is taken even by some of those, such as Harnad, who
also argue against the TT.  (Harnad thinks Searle's arguments
work for TT-passers, but not for TTT-passers.)

It's at least very rare for anyone on the "pro-AI" side to offer
arguments or other weapons against this line.  (So I end up siding
w/ people whose overall conclusions about AI I typically disagree
with.)

Other defenses of the TT often function to protect the line by
attacking its critics.

Note that the line does not involve showing how it is that producing
the behavior will also produce consciousness, understanding, or
whatever.  It's all at a higher level: we TT humans, machines 
should be the same; and there are frequent attempts to make out
that disagreement is due to prejudice.

BTW, agreeing with Mark Rosenfelder on the TT includes agreement
with the following:

  >Even if you granted it was a functional definition of intelligence, 
  >it would still not be a description of how to realize that function.  
  >Running a four-minute mile is a test of speed, but nothing in the test 
  >tells you how to run that fast.

  A better analogy with how the TT has been used would be if the
  four-minute mile were proposed as a definition of "athleticism",
  with scorn expressed toward anyone who asked for a better definition
  or suggested that better tests could be devised.

-- jd
