Article 1443 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!cis.ohio-state.edu!pacific.mps.ohio-state.edu!linac!uchinews!spssig!markrose
From: markrose@spss.com (Mark Rosenfelder)
Newsgroups: comp.ai.philosophy
Subject: Re: Cheating on the Turing Test
Keywords: Turing
Message-ID: <1991Nov20.175648.29489@spss.com>
Date: 20 Nov 91 17:56:48 GMT
References: <11779@star.cs.vu.nl> <11785@star.cs.vu.nl> <5657@skye.ed.ac.uk>
Organization: SPSS, Inc.
Lines: 21
Nntp-Posting-Host: spssrs7.spss.com

In article <5657@skye.ed.ac.uk> jeff@aiai.UUCP (Jeff Dalton) writes:
>If a computer came along that could pass the Turing Test,
>and do other interesting things, then I'd want to know how 
>it worked.  I'd want to decide whether it was a clever trick
>or whether it wasn't or whether I couldn't tell.

So would I.  The Turing Test has always seemed too easy to me, precisely
because humans are so easy to fool.
 
Perhaps we could consider some ways to cheat on the Turing Test, so we
can watch out for them?  For instance, here's a nice strategy, an
elaboration of ELIZA's: Analyze the human's last statement.
Rephrase it in different words, with minor variations (this shouldn't
be much harder than some existing AI projects).  Most people like to be
agreed with, so this should give them a very high opinion of the program.

Another trick: If the human's last statement can't be analyzed, change the
subject.  Humans are allowed non sequiturs, aren't they?

Trick 3: Ask a lot of questions.  This flatters the human and minimizes 
the amount of sentence generation we have to do...
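The three tricks above could be sketched as a toy program.  This is only an
illustration under assumed details -- the word-swap table, the canned topic
changes, and the question openers are all hypothetical, not anything the
post proposes concretely:

```python
import random

# Toy sketch of the three "cheating" strategies described above.
# The swap table, topic changes, and question openers are invented
# placeholders; a real attempt would need far richer rephrasing.

AGREEMENT_SWAPS = {"i": "you", "my": "your", "me": "you",
                   "am": "are", "you": "I", "your": "my"}
TOPIC_CHANGES = ["Speaking of which, have you traveled much lately?",
                 "That reminds me of something else entirely."]
QUESTION_OPENERS = ["Why do you say that?",
                    "What makes you feel that way?"]

def respond(statement: str) -> str:
    words = statement.strip().rstrip(".?!").split()
    if not words:
        # Trick 2: can't analyze the input, so change the subject.
        return random.choice(TOPIC_CHANGES)
    if random.random() < 0.5:
        # Trick 3: ask a question; flattering, and cheap to generate.
        return random.choice(QUESTION_OPENERS)
    # Trick 1: ELIZA-style rephrase with pronouns flipped, then agree.
    flipped = " ".join(AGREEMENT_SWAPS.get(w.lower(), w) for w in words)
    return "Yes, exactly: " + flipped + ", as you say."
```

Even this crude loop covers every input with something superficially
responsive, which is precisely what makes such tricks worth watching for.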


