Newsgroups: comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!news.mathworks.com!hookup!olivea!news.hal.COM!decwrl!netcomsv!netcom.com!jqb
From: jqb@netcom.com (Jim Balter)
Subject: Re: Bag the Turing Test
Message-ID: <jqbD0osB5.AMt@netcom.com>
Organization: NETCOM On-line Communication Services (408 261-4700 guest)
References: <1994Dec7.153905.11481@oracorp.com> <3cg70i$6iq@bmerhc5e.bnr.ca>
Date: Mon, 12 Dec 1994 07:18:41 GMT
Lines: 28

In article <3cg70i$6iq@bmerhc5e.bnr.ca>,
Ian Woollard <wolfe@bmerhb17.bnr.ca> wrote:
>I forget the name of the program, but it basically replied in a
>whimsical way. The interrogator never quite got a straight answer to a
>question, so was unable to prove conclusively that it didn't know what
>the hell it was on about. The trouble is this: some people are like
>that(!)

Without some real incentive, nothing can be counted on from the
participants.  If real incentives are offered, however, then it behooves
at least the human participants (this of course leaves wide open the question
of what counts as a real incentive to an AI) to try to convince the judges that
whimsy is not enough.  E.g., a human can argue that, while she is clearly
human, the judges can't be so sure about those other participants.
(One lazy way to run a Turing Test is to ask "What's your best argument that
you are human and not an AI?" and then sit back.)

>The outputs were therefore 'sensible' in a Turing Test sense.
>
>More than half of the interviewers thought it was a person, and I
>believe one person, who was fairly knowledgeable about AI, thought so
>too. ;-(

As with any scientific experiment, you need not accept the results.  If you
don't find the conditions of the test sufficiently rigorous, run your own test.
Just as Uri Geller can fool unwary physicists, an AI can fool unwary judges.
-- 
<J Q B>
