From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!qt.cs.utexas.edu!zaphod.mps.ohio-state.edu!wupost!uunet!mcsun!uknet!edcastle!aisb!jeff Tue Jan 28 12:16:47 EST 1992
Article 3078 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!qt.cs.utexas.edu!zaphod.mps.ohio-state.edu!wupost!uunet!mcsun!uknet!edcastle!aisb!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Table-lookup Chinese speaker
Message-ID: <1992Jan23.220442.24200@aisb.ed.ac.uk>
Date: 23 Jan 92 22:04:42 GMT
References: <1992Jan21.170056.23347@oracorp.com> <1992Jan22.205804.39265@spss.com>
Sender: news@aisb.ed.ac.uk (Network News Administrator)
Organization: AIAI, University of Edinburgh, Scotland
Lines: 79

In article <1992Jan22.205804.39265@spss.com> markrose@spss.com (Mark Rosenfelder) writes:
>For some reason I was assuming that the database had to store ALL sensible
>conversations.  This leads to numerous problems.  However, we are on different
>ground if the table stores, not all sensible replies to anything the
>tester says, but merely a few particularly good ones.  (It can't store
>just one response, or we would notice it making exactly the same responses
>in repeated runs, which would be a failure of the Turing test.)

I think that's a good point.  If we're allowed repeated tests,
and the answers are suspiciously the same, then we'd be, well,
suspicious.

But that's just what can happen with computers.  Reboot it,
ask it a question, and it always gives the same answer.  Or,
consider a bunch of computers, all with the same program.
They all give the same answers.
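
Here's a minimal sketch of that point (Python; the two-entry table
is invented for illustration):

    # A pure lookup "conversationalist": a function of its input alone,
    # with no state, no clock, and no randomness.
    TABLE = {
        "Hello.": "Hi there.",
        "How are you?": "Fine, thanks.",
    }

    def reply(question):
        return TABLE.get(question, "I don't know.")

    # Reboot it, run it again, run it on another machine: for a given
    # question, reply() gives the same answer every single time.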

One way around this would be to have some "real quantum randomness".
But then they're not just running a program, and would we really
want to say that _randomness_ made a difference?

Another possibility might be that the exact timing of the question
mattered.  The computer isn't just waiting for input, it's thinking,
and so the answer will be different depending on just where its
thoughts were when the question came in.  But we can control that
too, by setting breakpoints or something.
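
A toy sketch of that, where the background "thinking" is just a
counter (names invented):

    import threading, time

    state = 0   # the machine's train of thought, advancing on its own

    def think():
        global state
        while True:
            state += 1        # the thoughts move on between questions
            time.sleep(0.01)

    def reply(question):
        # The answer depends on *when* the question arrives.
        return "Asked at thought #%d: I don't know." % state

    threading.Thread(target=think, daemon=True).start()

Freeze `state' under a debugger and the replies become repeatable
again, which is the sense in which we can control it.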

So there might be lots of odd behavior from computers, even if
they're not just table lookup machines.

On the other hand...

>Does this get us out of hot water?  I don't think so.  I still think there's
>a strategy to defeat the table-lookup machine: concentrate on questions
>that rely on the present context.  (I have to admit that this strategy
>was suggested by reading Mikhail Zeleny's post.)
>
>For instance, ask the machine what today's date is.  Now, a reply with
>today's date in it is sensible, but should not be placed in the database, 
>because then it would be available if we run the machine tomorrow, too.
>When we are constructing the database we will have to limit the machine's
>responses to variations of "I don't know."  It will have to respond the
>same way to questions like "What city are we in?", "What's the big news in
>Washington this week?" and "What do you think about the Bulls this year?"
>
>An accumulation of such responses would cause the machine to fail the Turing
>test.  It's just too suspicious that all its statements, though reasonable
>in themselves, so punctiliously avoid all reference to the current context.

This point has been addressed in other messages, so I'll try to
say something different in this one.

What you might get is an evasive computer, but then there are
evasive people.
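
As a sketch of why the table-builder is forced into evasiveness
(Python again; the questions and canned replies are invented):

    CONTEXT_QUESTIONS = {
        "What is today's date?",
        "What city are we in?",
        "What's the big news in Washington this week?",
    }

    EVASIONS = ["I don't know.", "I couldn't say.", "No idea, sorry."]

    def table_entry(question, variant):
        if question in CONTEXT_QUESTIONS:
            # An answer with a date in it, stored today, would be wrong
            # tomorrow; only evasions are safe to store in advance.  They
            # are varied a little so repeated runs don't repeat verbatim.
            return EVASIONS[variant % len(EVASIONS)]
        return "(one of the few particularly good canned replies)"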

In discussions of the Turing Test, we're sometimes asked to
imagine that the computer's responses were indistinguishable
from a human's.  But which human's?  People aren't all alike, 
and so we might say there's a range of human responses and
the computer's have to fall in that range.

But what if the computers were all in one part of the range?
Maybe they were all very good at arithmetic and poor at knowing
who was president of the US, and so on until we have something
fairly narrow.  Or maybe it doesn't have to be that narrow.
Maybe we could still get pretty good at distinguishing computers
from humans even though all the machine responses were ones
a human might make.  

This "tuned" Turing test would fail some humans.  (It's really the
same test, but we've gotten better at interpreting the results.)  But
it would not fail most hamans and maybe it could be such as to fail
all computers.
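
The "tuning" might amount to no more than this sort of thing (a
sketch; the scores and thresholds are invented):

    def looks_like_a_machine(arithmetic_score, news_score):
        # Each answer, taken alone, is one a human might give.  But the
        # combination (superb arithmetic, hazy on current events) sits
        # in a narrow corner of the human range.  Some humans land in
        # that corner too, and this test would wrongly fail them.
        return arithmetic_score > 0.99 and news_score < 0.2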

So would you say that computers therefore don't have intentionality,
understanding, etc.?  Why?  Just because they're like some subset of
the humans rather than covering the same range?  Or would you say
that we should look for evidence other than the Turing test?

-- jd


