From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!bonnie.concordia.ca!thunder.mcrcim.mcgill.edu!snorkelwacker.mit.edu!spool.mu.edu!mips!decwrl!uunet!mcsun!uknet!edcastle!aiai!jeff Thu Feb 20 15:20:56 EST 1992
Article 3750 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!bonnie.concordia.ca!thunder.mcrcim.mcgill.edu!snorkelwacker.mit.edu!spool.mu.edu!mips!decwrl!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Humongous table-lookup misapprehensions
Message-ID: <6189@skye.ed.ac.uk>
Date: 14 Feb 92 14:22:52 GMT
References: <1992Feb12.145716.22305@ccu.umanitoba.ca> <1992Feb13.073457.16647@a.cs.okstate.edu> <1992Feb13.201806.26828@psych.toronto.edu>
Sender: news@aiai.ed.ac.uk
Organization: AIAI, University of Edinburgh, Scotland
Lines: 46

In article <1992Feb13.201806.26828@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:
>In article <1992Feb13.073457.16647@a.cs.okstate.edu> onstott@a.cs.okstate.edu (ONSTOTT CHARLES OR) writes:
>>  Of course, one could attempt to redefine intelligence without
>>thinking of it as particularly human.  If this be the case, and if we
>>accept that such a table is intelligent, and if the research project
>>is founded on understanding the mind by means of the table, the
>>look-up table is uninteresting because all it has done is confirm a
>>particular theory of "intelligence" and not a theory of
>>"how-the-mind-works."
>
>But many cognitive scientists see the purpose of their discipline not to
>explain *human* intelligence, but intelligence *per se*.

Don't you think there'd be at least something a bit odd about it
if Cog Sci ended up explaining intelligence per se but still not
human intelligence?

_AI_, on the other hand, can be content with creating some kind
of intelligence, even if it has little to do with human intelligence.

Moreover, I think it's reasonable to consider "intelligence" as
a question of performance (i.e., behavior).  I think that's the way
the word is going, at least ("intelligent terminals", etc.), and
if any of the words that get debated here are going to be defined
in terms of behavior, I think that's the one to pick.

This leaves plenty of room for Artificial Intelligence to go about
the task of creating useful programs and machines of various sorts
without having to win any of these big debates about intentionality
and understanding.

On the other hand, the recurring suggestion that, because "flying"
has a more or less behavioral definition, all sorts of words like
"understanding" and "intentionality" should have a behavioral
definition too, looks like an attempt to take over all the words,
and so to win the argument by removing all the vocabulary that
we use to talk about the interesting cases.  Perhaps we will no
longer think these words refer to anything interesting after
Cog Sci completes its task; but I think we should wait until
that happens.

>                           (You don't rule out planes from
>the category of "flying things" merely because they don't flap their
>wings like birds.)

-- jd
