From newshub.ccs.yorku.ca!torn!utcsri!rpi!zaphod.mps.ohio-state.edu!news.acns.nwu.edu!news.ils.nwu.edu!uxa.ecn.bgu.edu!mp.cs.niu.edu!rickert Tue Jul 28 09:41:21 EDT 1992
Article 6447 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!torn!utcsri!rpi!zaphod.mps.ohio-state.edu!news.acns.nwu.edu!news.ils.nwu.edu!uxa.ecn.bgu.edu!mp.cs.niu.edu!rickert
From: rickert@mp.cs.niu.edu (Neil Rickert)
Subject: Re: Defining other intelligence out of existence
Message-ID: <1992Jul14.174442.16152@mp.cs.niu.edu>
Organization: Northern Illinois University
References: <BILL.92Jul13114604@cortex.nsma.arizona.edu> <1992Jul14.031930.3423@mp.cs.niu.edu> <BILL.92Jul14102805@ca3.nsma.arizona.edu>
Date: Tue, 14 Jul 1992 17:44:42 GMT
Lines: 63

In article <BILL.92Jul14102805@ca3.nsma.arizona.edu> bill@nsma.arizona.edu (Bill Skaggs) writes:
>rickert@mp.cs.niu.edu (Neil Rickert) writes:
>
>   Bill Skaggs writes:
>   >
>   >As a prototype, think of a dog standing in front of a fence, with a
>   >bone on the other side; twenty feet away the fence has a hole.  The
>   >problem is to reach that scrumptious bone; the initial state is the
>   >dog's current position, transitions are movements, and there is a
>   >single goal state, namely the bone's position.
>
>     Aren't you doing just what the subject line of this article is trying
>   to avoid - namely "defining other intelligence out of existence?" 
>
>My intention was to do exactly the opposite.  When the dog solves this
>problem, it is showing intelligence.  I don't understand why you say
>this is a homocentric view of intelligence -- it seems completely
>species-independent to me.  The only thing required to speak of the
>intelligence of an entity is that it have goals; the class of
>goal-possessing entities contains a lot more than just humans, doesn't
>it? 

  How can we be sure that the dog has goals, or that the dog sees this as
a problem?  Aren't we anthropomorphizing when we assume so?  For that
matter, how certain are we that human intelligence has much to do with
possessing goals?  Couldn't the goals often be no more than a
rationalization?

>     I would much rather look at intelligence as something which has evolved.
>   Thus I would measure intelligence in terms of a creature's ability to
>   adapt to a broad variety of circumstances, since surely this adaptability
>   is one of the forces in the evolution of intelligence.

>But you've weakened my argument in any case.  I was proposing "the
>ability to solve problems" as a *descriptive* definition of
>intelligence, meaning that I thought this was pretty close to the way
>most people usually use the word.

  When most people use the word "intelligence" they think of it as
being a primarily human phenomenon.  There is nothing wrong with sticking
to the common meaning, but if you do so, you are indeed coming quite
close to "defining other intelligence out of existence".

>                                   You use the word differently; if
>most people do, then I am wrong.

  I agree I use the word differently.  Our common use of the word is
very homocentric, and we will never understand intelligence unless we
broaden our horizons beyond this meaning.

>                     Situational intelligence is a set of strategies
>for dealing with particular kinds of problems.

  But the difficulty is that different intelligences might not even
agree on what the problem is.

>Intelligence, abstractly viewed, always involves search and pruning.

  I am not at all convinced of this.  Where does the "Aha! Insight"
effect, or serendipitous discovery, fit in here?  You can always
say that this is the result of an unconscious search, but that is
a pure rationalization.
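  For concreteness, here is the search-and-pruning formulation being
debated -- Bill's dog-and-fence prototype -- sketched as a breadth-first
search over grid positions.  The grid size, fence location, and
coordinates are all invented for illustration; nothing in the sketch
settles whether the dog actually does anything like this.

```python
from collections import deque

# States are (x, y) grid positions; transitions are one-step moves;
# the single goal state is the bone's position.  A vertical fence at
# x == FENCE_X blocks movement except at the hole (y == HOLE_Y).
WIDTH, HEIGHT = 25, 5
FENCE_X = 12
HOLE_Y = 4

def neighbors(pos):
    """One-step moves that stay on the grid and off the fence."""
    x, y = pos
    for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if not (0 <= nx < WIDTH and 0 <= ny < HEIGHT):
            continue                      # off the grid
        if nx == FENCE_X and ny != HOLE_Y:
            continue                      # blocked by the fence
        yield (nx, ny)

def search(start, goal):
    """Breadth-first search from start to goal; returns the path."""
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        current = frontier.popleft()
        if current == goal:
            path = []                     # reconstruct by backtracking
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        for nxt in neighbors(current):
            if nxt not in came_from:      # prune already-visited states
                came_from[nxt] = current
                frontier.append(nxt)
    return None                           # no path to the bone

dog, bone = (10, 0), (14, 0)              # bone just across the fence
path = search(dog, bone)                  # detours through the hole
```

The "pruning" here is only the trivial kind (never revisit a state);
whether anything like this search underlies the dog's detour, or the
"Aha!" cases mentioned above, is exactly what is in dispute.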



