Article 3129 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!swrinde!cs.utexas.edu!sol.acs.unt.edu!mips.mitek.com!spssig!markrose
From: markrose@spss.com (Mark Rosenfelder)
Newsgroups: comp.ai.philosophy
Subject: Re: Table-lookup Chinese speaker
Message-ID: <1992Jan24.185620.41411@spss.com>
Date: 24 Jan 92 18:56:20 GMT
References: <1992Jan21.170056.23347@oracorp.com> <1992Jan22.205804.39265@spss.com> <1992Jan23.220442.24200@aisb.ed.ac.uk>
Organization: SPSS, Inc.
Lines: 44
Nntp-Posting-Host: spssrs7.spss.com

In article <1992Jan23.220442.24200@aisb.ed.ac.uk> jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
>This "tuned" Turing test would fail some humans.  (It's really the
>same test, but we've gotten better at interpreting the results.)  But
>it would not fail most humans and maybe it could be such as to fail
>all computers.
>
>So would you say that therefore computers don't have intentionality,
>understanding, etc?  Why?  Just because they're like some subset of
>the humans rather than covering the same range?  Or would you say
>that we should look for evidence other than the Turing test?

Look beyond the Turing test, of course.  Specifically, we should look at
the program's algorithm and see why it works or why it almost works.

An analogy can be made with chess-playing computers.
Today computers can play chess, but they don't play it the same way
people do, and I think most people's intuition would be that the way
people play chess (matching patterns, applying abstract rules) is more
"intelligent" than the way computers do it (using brute force to choose
the best move from among zillions of potential game sequences).  I don't
think this analysis would have been very clear thirty years ago, before
we had tried to build chess-playing computers.
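
To make the contrast concrete, here is a minimal sketch of the brute-force
approach (plain minimax over a hypothetical game interface; the names
legal_moves, apply, evaluate, and is_over are assumed, not any real chess
program's API, and real programs add alpha-beta pruning and much more):

    # A minimal sketch of brute-force game-tree search (plain minimax).
    # The `game` object and its methods (legal_moves, apply, evaluate,
    # is_over) are assumed names, not any real chess program's API.

    def minimax(game, state, depth, maximizing):
        """Exhaustively score every line of play down to `depth` moves."""
        if depth == 0 or game.is_over(state):
            return game.evaluate(state)   # static score of the position
        scores = [minimax(game, game.apply(state, m), depth - 1,
                          not maximizing)
                  for m in game.legal_moves(state)]
        return max(scores) if maximizing else min(scores)

    def best_move(game, state, depth):
        """Pick the move whose whole subtree scores best for us."""
        return max(game.legal_moves(state),
                   key=lambda m: minimax(game, game.apply(state, m),
                                         depth - 1, False))

The point is that nothing in this procedure looks like pattern matching or
abstract rules; it just grinds through the tree of possible continuations.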

I suspect that once we have a computer that passes the Turing test,
we will find our intuitions similarly clarified-- we'll either say "Yes,
that's intelligence," or "No, that's not quite it; now we see that what
we really want is _this._"  (To put it another way, we'll throw out the
Turing test and replace it with something better.)

As for algorithms which pass the test in a defective fashion, we should
simply recognize that the Turing test will not always result in a clear yes
or no; sometimes it will say "I don't know," or "Passes, but suspiciously,"
or "Fails, but interestingly."

As a simple example of the first case, consider the conversation:

   Tester:  Hello.
   Entity:  Hi.  Sorry, I'm busy right now and I can't talk.  See you later.
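
To see how cheaply such an exchange could be produced, here is a toy
sketch (hypothetical; the table and names are invented for illustration)
of a pure lookup-table responder that deflects every input:

    # A toy lookup-table responder (hypothetical; table and names are
    # invented for illustration).  Every input maps to a canned
    # deflection, so the exchange ends before the tester learns anything.

    CANNED = {
        "hello": "Hi.  Sorry, I'm busy right now and I can't talk.  "
                 "See you later.",
    }
    DEFAULT = "Sorry, I really have to run.  Bye."

    def respond(utterance):
        key = utterance.strip().lower().rstrip(".!?")
        return CANNED.get(key, DEFAULT)

    print("Tester:  Hello.")
    print("Entity:  " + respond("Hello."))

Nothing in it engages with what the tester said; that is exactly why the
fair verdict here is "I don't know" rather than a pass.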

More interesting cases would be those where the subset of humanity the
machine resembles is a class of mental patients.  We might not like to say
that the humans involved fail the Turing test.  On the other hand, I don't
think we could simply assert that the machine passes the test.  At the very
least there remains an asterisk in the record book.


