Newsgroups: comp.ai
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!news.mathworks.com!uunet!psinntp!scylla!daryl
From: daryl@oracorp.com (Daryl McCullough)
Subject: Re: Loebner Prize $2000 and a medal
Message-ID: <1995May1.001453.22610@oracorp.com>
Organization: Odyssey Research Associates, Inc.
Date: Mon, 1 May 1995 00:14:53 GMT
Lines: 69


cmckin@mbnet.mb.ca (Christopher McKinstry) writes:
[daryl@oracorp.com (Daryl McCullough) wrote:]
>
>>"Clever hacks" always work by anticipating the line of questions that
>>the interviewer will ask, and having canned responses that work in
>
>sounds like the way my mind works... guess i'm a "clever hack".
>
>>those cases. Since the devil's advocate is allowed to look at the code
>>and documentation, he can choose a line of questioning that probes
>>exactly the weaknesses of the program. If, even knowing how the thing
>>works, he can't find a question to ask that the program fails to
>>answer intelligently, then the program is considered intelligent
>>(or at least not unintelligent).

>by trying to eliminate "clever hacks" you're corrupting the entire point of a 
>turing test. that is it doesn't matter how something is done, just that it 
>appears to be done beyond reasonable doubt.
>
>the brilliance of the turing test is that it dismisses the "how" of 
>intelligence as irrelevant.

I don't think you understood my proposal. The devil's advocate gets to
look at the code, but he is *not* allowed to make the judgement as to
whether the program is intelligent or not. The judge does *not* look
at the code. Therefore, the test is still completely behavioral. 

The point of allowing the devil's advocate to look at the code is
simply to speed up the testing process. He can cut straight to the
questions that separate the wheat from the chaff. Take the example of
Eliza, the simulated psychotherapist. If you talk with Eliza long
enough, you will eventually discover that it is not intelligent, but
that discovery can take a long time. If you have some idea of how it
works, then you can demonstrate purely behaviorally, and much more
quickly, that Eliza is unintelligent.
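To make that concrete, here is a minimal Eliza-style responder (the
keyword rules are hypothetical illustrations, not Weizenbaum's actual
DOCTOR script). Reading the rule table tells the devil's advocate
exactly which question to ask: anything that avoids every keyword
falls through to the same canned default.

```python
import re

# A tiny Eliza-style responder: scan for a keyword, emit a canned
# reflection. (Hypothetical rules for illustration only.)
RULES = [
    (r"\bmother\b", "Tell me more about your family."),
    (r"\bI am (.+)", "How long have you been {0}?"),
    (r"\bcomputer\b", "Do computers worry you?"),
]
DEFAULT = "Please go on."

def eliza(utterance):
    # Try each rule in order; fall back to the default reflection.
    for pattern, response in RULES:
        m = re.search(pattern, utterance)
        if m:
            return response.format(*m.groups())
    return DEFAULT

# Casual chat looks plausible:
print(eliza("I am unhappy"))            # "How long have you been unhappy?"

# But a tester who has seen RULES knows any keyword-free question
# gets the same stock reply, every single time:
print(eliza("What is two plus two?"))   # "Please go on."
```

The failure is still demonstrated purely by input/output behavior; the
code merely told the tester where to aim.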

This is no different from any other kind of software testing. If you
have a program that is supposed to compute square roots, random
testing might take forever to reveal that it doesn't work, but if you
look at the code, you can more easily figure out which inputs it gives
the wrong answers for. The fact that you looked at the code does *not*
mean that you have inserted a "how" requirement---the definition of
correctness is still that it outputs the square root of any number it
is presented with.
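The square-root analogy can be sketched directly (the routine and its
deliberately planted bug are hypothetical): the defect lives in a band
one part in a billion wide, so black-box random testing essentially
never hits it, while one glance at the source points straight at the
failing input.

```python
import math
import random

def bad_sqrt(x):
    # Newton's method -- except a (deliberate, hypothetical) shortcut
    # returns garbage for inputs in one narrow band.
    if 1.0e6 <= x < 1.0e6 + 1.0:      # the hidden weak spot
        return 0.0
    guess = max(x, 1.0)
    for _ in range(60):               # more than enough iterations
        guess = 0.5 * (guess + x / guess)
    return guess

# Black-box random testing over [0, 1e9): the bad band is one part
# in a billion, so 10,000 trials essentially never catch it.
random.seed(0)
misses = sum(
    abs(bad_sqrt(x) - math.sqrt(x)) > 1e-6 * max(1.0, math.sqrt(x))
    for x in (random.uniform(0, 1e9) for _ in range(10_000))
)
print(misses)                          # expect 0: the bug goes unseen

# Having read the code, the tester goes straight to the weak spot:
print(bad_sqrt(1.0e6))                 # 0.0, not 1000.0
```

Correctness is still judged purely by outputs; inspecting the source
only chose which test to run first.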

The same is true with my modified Turing Test---the definition of
success is still that it responds to inputs as well as a typical
human. However, the answer to that purely behavioral question can
in many cases be found much more quickly if you can look at the code.

>and don't forget, intelligence is just a "clever hack" of 
>evolution. i'm sure someone with the code to your brain could find many things
>that would highlight your weaknesses (off hand i have no trouble 
>of thinking about a couple of places to start).

Same to you, buddy! Anyway, I don't disagree with any of that. The point of
the Turing Test is not to show that a machine is perfect, it is to show that
it is as intelligent as a human. God's advocate (the lawyer arguing that
his client is intelligent) can certainly argue that any mistakes the AI makes
are well within the range for normal human error.

>as a side matter... i don't think even if the devil's advocate had my code, 
>he could make any sense of it.

That's just because it is poorly documented.

Daryl McCullough
ORA Corp.
Ithaca, NY
