From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael Thu Feb 20 15:21:14 EST 1992
Article 3781 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael
From: michael@psych.toronto.edu (Michael Gemar)
Subject: Re: Humongous table-lookup misapprehensions
Message-ID: <1992Feb16.184329.8680@psych.toronto.edu>
Organization: Department of Psychology, University of Toronto
References: <1992Feb13.073457.16647@a.cs.okstate.edu> <1992Feb13.201806.26828@psych.toronto.edu> <6189@skye.ed.ac.uk>
Date: Sun, 16 Feb 1992 18:43:29 GMT

In article <6189@skye.ed.ac.uk> jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
>In article <1992Feb13.201806.26828@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:
>>In article <1992Feb13.073457.16647@a.cs.okstate.edu> onstott@a.cs.okstate.edu (ONSTOTT CHARLES OR) writes:
>>>  Of course, one could attempt to redefine intelligence without thinking
>>>of it as particularly human.  If this be the case, and if we accept
>>>that such a table is intelligent, and if the research project is founded
>>>on understanding the mind by means of the table; the look-up table is
>>>uninteresting because
>>>all it has done is confirm a particular theory of "intelligence" and
>>>not a theory of "how-the-mind-works."  
>>
>>But many cognitive scientists see the purpose of their discipline not to
>>explain *human* intelligence, but intelligence *per se*.
>
>Don't you think there'd be at least something a bit odd about it
>if Cog Sci ended up explaining intelligence per se but still not
>human intelligence?

If we arrive at a general theory of intelligence which covers *all*
examples of intelligence, then we *will* have explained human
intelligence; human intelligence is, after all, one of those examples.

>_AI_, on the other hand, can be content with creating some kind
>of intelligence, even if it has little to do with human intelligence.
>
>Moreover, I think it's reasonable to consider "intelligence" as
>a question of performance (ie, behavior).  I think that's the way
>the word is going, at least ("intelligent terminals", etc), and 
>if any of the words that get debated here are going to be defined
>in terms of behavior, I think that's the one to pick.
>
>This leaves plenty of room for Artificial Intelligence to go about
>the task of creating useful programs and machines of various sorts
>without having to win any of these big debates about intentionality
>and understanding.

But it is precisely "these big debates" which make the implications
of AI programs different from those of programs that do weather
prediction.  If you are only interested in "creating useful programs",
then AI is simply a branch of engineering.  This is fine, but then it
must repudiate its claim to produce minds.  For me, it is the "big
debates" which make AI interesting.

>On the other hand, the recurring suggestion that, because "flying"
>has a more or less behavioral definition, all sorts of words like
>"understanding" and "intentionality" should have a behavioral
>definition too, looks like an attempt to take over all the words,
>and so to win the argument by removing all the vocabulary that
>we use to talk about the interesting cases.  Perhaps we will no
>longer think these words refer to anything interesting after
>Cog Sci completes its task; but I think we should wait until
>that happens.

Well, I for one would *not* advocate a purely behavioural definition
of "understanding" and "intentionality".  But then again, these things
differ from "flying" precisely because of their subjective component.

- michael



