From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!cs.utexas.edu!uunet!mcsun!uknet!edcastle!aisb!jeff Fri Jan 31 10:26:51 EST 1992
Article 3251 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!cs.utexas.edu!uunet!mcsun!uknet!edcastle!aisb!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Table-lookup Chinese speaker
Message-ID: <1992Jan29.020518.11271@aisb.ed.ac.uk>
Date: 29 Jan 92 02:05:18 GMT
References: <1992Jan22.205804.39265@spss.com> <1992Jan23.220442.24200@aisb.ed.ac.uk> <1992Jan24.185620.41411@spss.com>
Sender: news@aisb.ed.ac.uk (Network News Administrator)
Organization: AIAI, University of Edinburgh, Scotland
Lines: 44

In article <1992Jan24.185620.41411@spss.com> markrose@spss.com (Mark Rosenfelder) writes:
>In article <1992Jan23.220442.24200@aisb.ed.ac.uk> jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
>>This "tuned" Turing test would fail some humans.  (It's really the
>>same test, but we've gotten better at interpreting the results.)  But
>>it would not fail most humans and maybe it could be such as to fail
>>all computers.
>>
>>So would you say that therefore computers don't have intentionality,
>>understanding, etc?  Why?  Just because they're like some subset of
>>the humans rather than covering the same range?  Or would you say
>>that we should look for evidence other than the Turing test?
>
>Look beyond the Turing test, of course.  Specifically, we should look at
>the program's algorithm and see why it works or why it almost works.

This is a surprise.  I thought we might be doomed to disagree on
everything (as so often happens on the net), and here we agree on what
I think is one of the most important points.

>An analogy can be made with chess-playing computers.  
>Today computers can play chess, but they don't play it the same way 
>people do, and I think most people's intuitions would be that the way
>people play chess (matching patterns, applying abstract rules) is more
>"intelligent" than the way computers do (using brute force to choose the
>best from among zillions of potential game sequences).  I don't think
>this analysis would have been very clear thirty years ago, before
>we had tried to build chess-playing computers.

Not only that, some chess programs took a more "human" approach
in the sense that they explicitly represented goals and plans
rather than trying for maximum search.  The effectiveness of
search was, I think, somewhat surprising.  There were even
programs that got better when some of the "smarts" were removed,
because they were then faster and could look further ahead.
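The tradeoff described above can be sketched concretely.  This is a
minimal illustration (my own, not from any of the programs discussed):
a fixed-depth negamax search with a deliberately cheap evaluation
function, applied to a toy subtraction game rather than chess.  The
point is that a cheap evaluation lets the program search more plies in
the same time budget, which is exactly how removing "smarts" can make
a program stronger.

```python
def negamax(state, depth, evaluate, moves, apply_move):
    """Best achievable score for the side to move, searching `depth` plies.

    Brute-force game-tree search: no pattern matching, no plans or
    goals -- just recursive enumeration of move sequences.
    """
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)
    best = float("-inf")
    for m in legal:
        # Score from the opponent's perspective, negated for ours.
        score = -negamax(apply_move(state, m), depth - 1,
                         evaluate, moves, apply_move)
        best = max(best, score)
    return best


# Toy game (a stand-in for chess): n objects, take 1 or 2 per turn,
# whoever takes the last object wins.
def moves(n):
    return [m for m in (1, 2) if m <= n]

def apply_move(n, m):
    return n - m

def evaluate(n):
    # The side to move with nothing left has already lost; otherwise
    # the "dumb" evaluation claims ignorance.
    return -1 if n == 0 else 0

print(negamax(3, 10, evaluate, moves, apply_move))  # -1: losing position
print(negamax(4, 10, evaluate, moves, apply_move))  # 1: winning position
```

With enough depth, even this know-nothing evaluation plays the toy game
perfectly; the same dynamic is what made raw search surprisingly
effective in chess.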

>I suspect that once we have a computer that passes the Turing test,
>we will find our intuitions similarly clarified-- we'll either say "Yes,
>that's intelligence," or "No, that's not quite it; now we see that what
>we really want is _this._"  (To put it another way, we'll throw out the
>Turing test and replace it with something better.)

That too is very close to what I'd say.

-- jd


