Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!bonnie.concordia.ca!nstn.ns.ca!aunro!ukma!wupost!darwin.sura.net!Sirius.dfn.de!fauern!unido!mcsun!uknet!edcastle!aifh!bhw
From: bhw@aifh.ed.ac.uk (Barbara H. Webb)
Newsgroups: comp.ai.philosophy
Subject: Re: Intelligence testing
Message-ID: <1992Jan17.165423.21455@aifh.ed.ac.uk>
Date: 17 Jan 92 16:54:23 GMT
References: <1992Jan14.015806.23985@oracorp.com> <5982@skye.ed.ac.uk> <1992Jan15.185342.11589@aifh.ed.ac.uk> <5993@skye.ed.ac.uk> <1992Jan16.122937.23838@aifh.ed.ac.uk> <6000@skye.ed.ac.uk> <6001@skye.ed.ac.uk>
Reply-To: bhw@aifh.ed.ac.uk (Barbara H. Webb)
Organization: Dept AI, Edinburgh University, Scotland
Lines: 42

In article <6001@skye.ed.ac.uk> jeff@aiai.UUCP (Jeff Dalton) writes:

[A rather lengthy story about blue boxes and presents that I'll assume
readers have seen. Not a very convincing story, though it does contain
one part that neatly illustrates one of the problems with Searle's
Chinese room:]

>Now, one day someone (call him "John Searle") comes along and offers
>an argument to the effect that an artificial box couldn't contain a
>present, even though it was blue.  He argues that _he_ could make
>an artificial box, and it wouldn't contain a present, because he
>doesn't know how to make presents.  He could follow all the steps
>that the machine follows, and he still wouldn't know.  This
>becomes known as the "Chinese Box argument".

Now, how does he know that "following all the steps that the machine
follows" will not enable him to make a present? Perhaps he won't 'know'
how to make one; perhaps the machine doesn't 'know' either; but
following the same operations as the machine may nevertheless result in
a present being in the box.

>[The moral of this story for Turing Testers is: a test might turn out
>to work, even though we don't have any good reason for supposing that
>it does.  It might turn out to work by accident (no one forgets to
>put a present in) or because of something we haven't yet discovered
>(the pseudo-random number generators line up).  The possibility
>that the test might turn out to work does not, of course, constitute
>a good reason for relying on it now.]

Are you saying that there is _no_ reason for supposing that the Turing
Test may turn out to work? That the idea should be abandoned entirely
because we are not dead certain that every single thing that passes the
Turing Test will have 'intelligence' in the sense that we use it of
ourselves? That Turing might just as well have suggested the test "can
it walk and chew gum" as an indication of intelligence, and that if he
had, AI researchers would now be pouring their efforts into gum-chewing
robots? The reason for using the test now is that it is an acceptable
hypothesis that "anything that could converse like a human must have
understanding". Not a proven hypothesis. An acceptable one. For now.

BW