Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!mintaka.lcs.mit.edu!spdcc!dirtydog.ima.isc.com!ispd-newsserver!psinntp!scylla!daryl
From: daryl@oracorp.com
Subject: Re: Real-life Turing Test
Message-ID: <1992Feb14.185749.13573@oracorp.com>
Organization: ORA Corporation
Date: Fri, 14 Feb 1992 18:57:49 GMT

Mark Rosenfelder writes:

> For those who still think the Turing Test is a sufficient test for
> intelligence, there is food for thought in the results of the Loebner
> Prize Competition in Boston, in which ten judges Turing-tested six
> programs and two human beings.  A program called PC Therapist, created
> by Joseph Weintraub, was judged as human by _five out of ten_ judges.

Obviously (to paraphrase Abraham Lincoln, I believe), you can fool some
of the people all of the time, and you can fool all of the people some of
the time. But it takes real AI to fool all the people all the time.

I don't think the Boston test was a serious attempt to see whether a
computer program could converse as well as a human being. In any case,
the fact that five of the ten judges judged the program *not* to be
human shows that humans and computers *could* still be distinguished
by behavioral tests (even if the other five judges were fooled).
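
To put rough numbers on that (a back-of-the-envelope sketch of my own,
not anything computed by the competition): suppose each judge
independently mistakes the program for a human with probability p.
Then the binomial distribution gives the chance of fooling at least k
of n judges. In Python:

    from math import comb

    def prob_fool_at_least(k, n, p):
        """P(at least k of n independent judges are fooled)."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i)
                   for i in range(k, n + 1))

    # A program that fools each judge only at chance level (p = 0.5)
    # still fools 5 or more of 10 judges most of the time:
    print(prob_fool_at_least(5, 10, 0.5))    # ~0.62

    # Fooling *all* 10 judges is rare unless p is close to 1:
    print(prob_fool_at_least(10, 10, 0.5))   # ~0.001
    print(prob_fool_at_least(10, 10, 0.95))  # ~0.60

So even a program whose imitation is no better than a coin flip will
match the PC Therapist result more often than not; fooling all the
judges, all the time, is a much taller order.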

Daryl McCullough
ORA Corp.
Ithaca, NY
