Newsgroups: comp.ai,comp.ai.philosophy,alt.cyberspace,comp.ai.nat-lang
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!news.mathworks.com!udel!gatech!swrinde!pipex!uknet!comlab.ox.ac.uk!newsserv!pr
From: pr@comlab.ox.ac.uk (Paul Rudin)
Subject: Re: Are there non-humans lurking on Internet/Usenet?
Message-ID: <PR.95Feb6160853@perlite.comlab.ox.ac.uk>
In-reply-to: billf@osi.ncsl.nist.gov's message of 2 Feb 1995 18:36:12 GMT
Organization: Oxford University Computing Lab
References: <mtm4.568.01C182F4@rsvl.unisys.com> <3gr8ms$n9o@dove.nist.gov>
Date: 06 Feb 1995 16:08:53 GMT
Lines: 30
Xref: glinda.oz.cs.cmu.edu comp.ai:27139 comp.ai.philosophy:25234 comp.ai.nat-lang:2781

>>>>> "Bill" == Bill Fisher <billf@osi.ncsl.nist.gov> writes:

    Bill> In article <mtm4.568.01C182F4@rsvl.unisys.com>, mtm4@rsvl.unisys.com (Mike McCormick) writes:
    >> BACKGROUND:
    >> 
    >> Indeed, a holy grail of AI for decades has been to write a program which can 
    >> pass the famous "Turing Test".  To pass, the program must fool a human 
    >> being talking with it into believing they are conversing with another real 
    >> person. ...

    Bill>   This has always seemed like a kinda dumb test to me, because you could
    Bill> probably get around it pretty easily with a cheating strategy.  Just
    Bill> have your program pretend to be a non-native speaker and simulate
    Bill> transmission problems:

    Bill>  Q: Marty thinks the Turing test is duck soup, what do you think?
    Bill>  A: Your are having to excuse me, please.  I am from the France.
    Bill>     There is so much static on the line that I don't hear you.
    Bill>     Did you say something about a buck being in your souP?

As far as I recall, the original formulation of the test involved
one questioner and two "questionees", both of whom try to convince
the questioner that they are the real person. Adopting the strategy
described would probably just result in the questioner choosing the
other participant.

In any case, I think an astute questioner could easily distinguish
between a fairly straightforward Eliza-type program generating
answers like the example above and an intelligent non-native speaker.
 
