From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!wupost!uunet!mcsun!uknet!edcastle!aisb!aisb!jeff Tue Jan 28 12:15:07 EST 1992
Article 2965 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!wupost!uunet!mcsun!uknet!edcastle!aisb!aisb!jeff
>From: jeff@aisb.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Table-lookup Chinese speaker
Message-ID: <1992Jan21.191924.18205@aisb.ed.ac.uk>
Date: 21 Jan 92 19:19:24 GMT
References: <1992Jan18.134742.4155@oracorp.com> <1992Jan20.182835.5307@spss.com>
Sender: news@aisb.ed.ac.uk (Network News Administrator)
Reply-To: jeff@aifh.ed.ac.uk (Jeff Dalton)
Organization: Dept AI, Edinburgh University, Scotland
Lines: 47

In article <1992Jan20.182835.5307@spss.com> markrose@spss.com (Mark Rosenfelder) writes:
>The mere fact of possessing a database of successful conversations does NOT
>imply that the machine can itself pass the Turing test.  The basic problem
>is that the machine's responses can be constrained to lie in the set of
>successful conversations, but the human's cannot be.
>
>Let's call the set of possible (up to hundred-year) conversations S.
>Within S we enumerate as T the set of conversations we judge to have passed 
>Turing Test.  It does not matter what criteria we use-- we can be as cautious
>or as generous as we like.

Think of the table as working like this.  The computer can take the
sequence of sentences in the conversation so far and look up a
response that a human could reasonably make.  (In some cases, a
reasonable response would be to stop the conversation, so longer
sequences with the same beginning needn't be stored.)
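For concreteness, the lookup scheme above might be sketched like this
(a toy illustration of my own, not anything from the article: the
dialogue is invented, and a real table would of course be
astronomically large):

```python
# Toy sketch of the table-lookup machine: the key is the whole
# conversation so far, the value is a response a human could
# reasonably make.  A value of None stands for "reasonably stop the
# conversation", so longer sequences with the same beginning need
# not be stored.
TABLE = {
    ("Hello.",): "Hi there.",
    ("Hello.", "Hi there.", "How are you?"): "Fine, thanks.",
    ("Hello.", "Hi there.", "Goodbye."): None,
}

def respond(conversation):
    """Look up the conversation so far; return a human-plausible
    reply, or None if the stored behaviour is to stop talking."""
    return TABLE[tuple(conversation)]

print(respond(["Hello."]))   # a reply a human might make
```

The point is only that the machine never has to generate anything:
every response it makes is one already judged humanly reasonable.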

We further suppose that humans can pass the Turing Test.  So if
the computer can always make a response that a human might make
in the same situation, the computer should pass too.

Moreover, we can eliminate responses that would cause us to say
a human had failed the Turing Test.  If there is some conversation
that would leave a human with no response that would not be regarded
as failing the test, then either (1) humans could always avoid this
predicament by making a different response earlier on, or (2) the
conversation could be used to make humans fail the test.

If (1), the table could list the same better response that avoids
the fatal conversation.  And (2) violates the assumption that humans
can pass the test.
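The pruning argument can be put in code, again as a toy sketch of my
own devising: treat the possible conversations as a tree whose leaves
are judged pass or fail, and keep a stored response only if some
continuation below it can still pass.  The dialogue labels are
invented.

```python
def safe(node):
    """A leaf is safe iff it passes the test; an interior node is
    safe iff at least one stored response leads to a safe subtree."""
    if isinstance(node, bool):      # leaf: True = judged to pass
        return node
    return any(safe(child) for child in node.values())

def prune(node):
    """Drop responses whose every continuation fails -- this is
    case (1): the table lists the better earlier response instead."""
    if isinstance(node, bool):
        return node
    return {reply: prune(child)
            for reply, child in node.items() if safe(child)}

# Saying "A" at this point can still pass; saying "B" cannot,
# so pruning removes "B" from the table.
tree = {"A": {"follow-up": True}, "B": {"follow-up": False}}
print(prune(tree))   # only the "A" branch survives
```

Case (2) corresponds to the root itself being unsafe: then there is
no response anywhere that passes, and humans would fail too.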

>Now picture a conversation C which fails the Turing test (that is, it's in
>S but not in T).  We can represent C as s(0), s(1), s(2), ..., where these
>are particular statements.

As above, either the machine can avoid this sequence by saying
something different at s(1) or s(2) or whatever (in the same way
that a human would avoid it), or else humans would also fail the
test.

Your mistake may be in thinking in terms of conversations failing
the test, as if a machine (but not a human?) could be forced into
a conversation in which there was nothing the machine could say
next that would allow it to pass the test.

-- jd
