Article 2808 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!sun-barr!ames!elroy.jpl.nasa.gov!usc!wupost!darwin.sura.net!europa.asd.contel.com!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Intelligence testing
Message-ID: <6001@skye.ed.ac.uk>
Date: 16 Jan 92 21:05:34 GMT
References: <1992Jan14.015806.23985@oracorp.com> <5982@skye.ed.ac.uk> <1992Jan15.185342.11589@aifh.ed.ac.uk> <5993@skye.ed.ac.uk> <1992Jan16.122937.23838@aifh.ed.ac.uk> <6000@skye.ed.ac.uk>
Reply-To: jeff@aiai.UUCP (Jeff Dalton)
Organization: AIAI, University of Edinburgh, Scotland
Lines: 111

In article <6000@skye.ed.ac.uk> jeff@aiai.UUCP (Jeff Dalton) writes:
>In article <1992Jan16.122937.23838@aifh.ed.ac.uk> bhw@aifh.ed.ac.uk (Barbara H. Webb) writes:
>> [various things]

>I suspect that what you're getting at is that if I think conversation
>without understanding is impossible, then I should accept the Turing
>Test, because whenever there was conversation there would (in my view)
>have to be understanding.  Well, if I could _show_ that conversation
>was impossible without understanding, then I should indeed accept
>the Turing Test.  But I can't show it's impossible, and neither can
>the people who want us to accept the TT right now.

In any case, here's an example that may help to show how a test can
turn out to be sufficient, after certain facts are discovered, even
though there weren't good reasons for accepting it before.

Suppose there's a machine that makes boxes.  Sometimes it puts a
present in a box.  It also paints the boxes different colours.  We
can't yet open these boxes, and we don't know how the machine works.
We're not even sure it's putting in any presents.  Still, we can
invent a Blue Box Test: instead of asking "does the box contain
a present?" ask "is it blue?".

Some people think boxes can be blue without containing presents.
Other people think the opposite.  Maybe, in this culture, there's a
tradition of using blue boxes for presents.  So ordinarily someone
could reason that if a box is blue it contains a present, and the
question now arises whether the same is true for machine-made boxes.

(We could even imagine that man-made boxes are "natural boxes", that
the machine makes "artificial boxes", and that people are debating the
possibility of "artificial presents".)

However, even before the machine came along, no one could determine
whether all blue boxes contained presents, because they could open
only the boxes addressed to them.  The question of whether anyone was
justified in reaching the same conclusion for all boxes became known
as the "other presents problem".

Now, one day someone (call him "John Searle") comes along and offers
an argument to the effect that an artificial box couldn't contain a
present, even though it was blue.  He argues that _he_ could make
an artificial box, and it wouldn't contain a present, because he
doesn't know how to make presents.  He could follow all the steps
that the machine follows, and he still wouldn't know.  This
becomes known as the "Chinese Box argument".

Moreover, Searle argues that the Blue Box Test is wrong.  His example
shows, he says, that it would not work for machine-made boxes, because
such boxes cannot contain presents, even if blue.  Other people argue
that the Blue Box Test is right, and that we use it all the time as
a solution to the other presents problem.

However, Searle has never seen an artificial blue box (if necessary,
we can have him making his argument before the machine existed), and
in fact he thinks that artificial blue boxes are impossible.  (This is
not completely ridiculous.  Maybe it's very difficult to get a synthetic
dye or paint of the right colour, just as it was difficult historically
for purples.)

Even worse for Searle, Barbara Webb comes along to say that Searle
is committed to the possibility of artificial blue boxes, because his
argument presupposes it.

So..., here's what might happen.  There isn't really a good reason
to believe the Blue Box Test.  Sure, we use it for natural boxes,
but the use of blue boxes for presents is just a cultural convention.
Someone _could_ make a mistake and fail to put a present in a blue
box.  Moreover, no one has investigated how the machine works, to
see if it always puts presents in blue boxes.  So we do not have a
good reason to rely on the BBT in the case of artificial boxes either.

However, it might turn out that the Blue Box Test would in fact work.
No one ever makes a mistake and omits a present, and, in the machine,
the pseudo-random number generator used for colours and the one used
for whether presents go into boxes happen to "line up" so that blue
boxes always have presents.  Someone might even be able to prove this,
by examining the RNGs in question; and, after they prove it, we
finally have a good reason to rely on the BBT for artificial boxes.
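[A purely illustrative sketch of this "lining up", for readers who like
code.  Nothing in the story specifies how the machine works; every name,
seed, and threshold below is invented for illustration.  The point is
just that two pseudo-random streams drawn from the same seed can make
one property (blue) guarantee another (present) as a provable fact of
the mechanism, discoverable only by examining the RNGs:]

```python
import random

# Hypothetical box machine.  The colour stream and the present stream
# are seeded identically, so they produce the same numbers.
def make_box(seed):
    colour_rng = random.Random(seed)    # RNG used for colours
    present_rng = random.Random(seed)   # RNG used for presents
    r_colour = colour_rng.random()
    r_present = present_rng.random()    # identical to r_colour
    colour = "blue" if r_colour < 0.3 else "red"
    # The present threshold (0.5) is wider than the blue threshold
    # (0.3), so a blue box always gets a present -- though some
    # non-blue boxes get presents too.
    has_present = r_present < 0.5
    return colour, has_present

# Examining the generators "proves" the Blue Box Test for this machine:
# every blue box it makes contains a present.
assert all(present
           for colour, present in (make_box(s) for s in range(10000))
           if colour == "blue")
```

[Note that the guarantee is an accident of the shared seed and the
thresholds, not of any connection between blueness and presents as
such -- which is exactly why no one had a good reason to trust the
test before the RNGs were examined.]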

Searle's reputation plummets.  He was wrong about artificial presents.
The machine makes perfectly good presents, and everyone likes them.
He was also wrong about artificial blue.  The machine's blue is as
blue as anyone could wish.  And he was wrong about the Blue Box Test.
It works just fine, even though there's still no really good reason to
suppose that people (as opposed to the machine) will never forget
to put a present in.  There is great rejoicing, and the "other presents
problem" is forgotten.

Searle dreams of another world in which he was right.  Artificial
blue was impossible, and the machine couldn't make presents either.
Unfortunately, Barbara Webb appears in his dream to point out
that the Blue Box Test still works just fine.  All blue boxes
do contain presents!  Sure, the artificial boxes don't, but
then they're not blue!

So the next day Searle goes out and gets some blue-box materials.
He gets a present too, and he assembles the materials into a box.
But he doesn't put the present in.  So in this world, the world
in which he was wrong on all the big questions, at least he's
right about the Blue Box Test.

[The moral of this story for Turing Testers is: a test might turn out
to work, even though we don't have any good reason for supposing that
it does.  It might turn out to work by accident (no one forgets to
put a present in) or because of something we haven't yet discovered
(the pseudo-random number generators line up).  The possibility
that the test might turn out to work does not, of course, constitute
a good reason for relying on it now.]

-- jd


