Article 2853 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!swrinde!mips!think.com!yale.edu!jvnc.net!darwin.sura.net!europa.asd.contel.com!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Intelligence testing
Message-ID: <6012@skye.ed.ac.uk>
Date: 17 Jan 92 22:14:37 GMT
References: <1992Jan14.015806.23985@oracorp.com> <5982@skye.ed.ac.uk> <1992Jan15.185342.11589@aifh.ed.ac.uk> <5993@skye.ed.ac.uk> <1992Jan16.122937.23838@aifh.ed.ac.uk> <6000@skye.ed.ac.uk> <6001@skye.ed.ac.uk> <1992Jan17.165423.21455@aifh.ed.ac.uk>
Reply-To: jeff@aiai.UUCP (Jeff Dalton)
Organization: AIAI, University of Edinburgh, Scotland
Lines: 79

In article <1992Jan17.165423.21455@aifh.ed.ac.uk> bhw@aifh.ed.ac.uk (Barbara H. Webb) writes:
>In article <6001@skye.ed.ac.uk> jeff@aiai.UUCP (Jeff Dalton) writes:
>
>[A rather lengthy story about blue boxes and presents that I'll assume 
>readers have seen. Not a very convincing story, though it does
>interestingly contain one part that illustrates one of the problems with
>Searle's Chinese room:]

Most of it wasn't an attempt to _convince_ you; it was just added
detail trying to be amusing.  Indeed, I could have left out everything
about Searle and still made the basic point I wanted to make.
(In particular, I didn't make the "Chinese Box" argument a real
parallel to the Chinese Room argument.)

Moreover, the whole thing should be read in the context of my
previous message (the one I made the blue box story a reply to).

>>[The moral of this story for Turing Testers is: a test might turn out
>>to work, even though we don't have any good reason for supposing that
>>it does.  It might turn out to work by accident (no one forgets to
>>put a present in) or because of something we haven't yet discovered
>>(the pseudo-random number generators line up).  The possibility
>>that the test might turn out to work does not, of course, constitute
>>a good reason for relying on it now.]
>
>Are you saying that there is _no_ reason for supposing that the Turing
>Test may turn out to work? 

No.  In what you quote above, I'm saying that a test might turn out
to work, even though we don't have any good reason for supposing
that it does.  Even silly tests can turn out to work.

We shouldn't employ a test because it might turn out to work.
We should employ it because we have good reasons (not, BTW,
100 percent dead-certain reasons) for thinking it does in 
fact work.

There should also be something that the test can do for us that
we want done.  We can't use the TT on conversational computers,
because there aren't any.  We can't use it to settle philosophical
problems about AI, because to assume that everything with the
right behavior has intentionality or consciousness or whatever
would be to assume too much.  In particular, it would be to assume
that it doesn't matter at all how the behavior is produced.
It wouldn't matter, for example, if a machine conversed by looking
up conversational threads in a vast table.
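
For concreteness, here is a toy sketch of the sort of lookup machine I
mean (purely illustrative; the Python, the tiny table, and the canned
replies are mine, not anyone's actual program).  It keys a table on the
conversation so far and spits back a canned reply.  Scale the table up
as far as you like and the behavior improves, but nothing that deserves
to be called understanding ever happens:

    # A toy stand-in for the "vast table" of conversational threads.
    # Keys are the interlocutor's utterances so far; values are canned replies.
    CANNED_THREADS = {
        ("Hello.",): "Hello.  How are you?",
        ("Hello.", "Fine, thanks.  And you?"): "Quite well, thanks.",
    }

    def reply(their_lines):
        # Look up the whole thread so far; fall back on a stock evasion
        # if it isn't in the table.
        return CANNED_THREADS.get(tuple(their_lines),
                                  "Hmm.  Let's change the subject.")

    if __name__ == "__main__":
        said = []
        for line in ["Hello.", "Fine, thanks.  And you?"]:
            said.append(line)
            print(">", line)
            print("<", reply(said))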

>That the idea should be abandoned entirely
>because we are not 100 percent dead-certain that every single case of
>something that passes the Turing Test will definitely have
>'intelligence' in the sense that we use it for ourselves?

No.  "Good reasons" does not mean "100 percent dead-certain".

>That Turing was equally likely to have suggested the test "can it
>walk and chew gum" as an indication of intelligence, and if he had,
>AI researchers would be pouring their efforts into gum-chewing
>robots?

That it's better than some other tests does not mean it's any
good.

>The reason for using the test now is because it is an acceptable
>hypothesis that "anything that could converse like a human must
>have understanding". Not a proven hypothesis. An acceptable one.

But why?  Why should we use the Turing Test?  There aren't any robots
running around that can pass it, so we have no occasion to use it to
decide whether or not they have intentionality (or understanding or
consciousness or whatever).  It can't be used to refute someone who questions
whether TT behavior always involves understanding, because then it
would be begging the question.  For the same reason, it can't be used
to show that, in the Chinese Room, "the system understands".

The TT just encourages the people who think we should be scientific
and who think anything that involves questions about experience is 
mystical nonsense.

-- jd


