From newshub.ccs.yorku.ca!torn!cs.utexas.edu!wupost!ukma!oldham Wed Sep 16 21:21:59 EDT 1992
Article 6805 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!wupost!ukma!oldham
From: oldham@ms.uky.edu (Joseph Oldham)
Subject: Re: Don't try to "define" intelligence
References: <1992Aug29.143021.8163@Princeton.EDU> <715493498@sheol.UUCP> 
    <1992Sep6.195000.3465@Princeton.EDU>
Message-ID: <1992Sep7.20955.19841@ms.uky.edu>
Date: Mon, 7 Sep 1992 06:09:55 GMT
Organization: University Of Kentucky, Dept. of Math Sciences
Lines: 25

Although the analogy to gravity seems weak to me, Stevan Harnad seems to
make a reasonable point.  However, my assumption would be that most of
us would think that trying to define intelligence is both useful and
necessary.  It is useful so long as we use our definition(s) to give
us focus on whatever more specific problem we're working on.  The danger
that Harnad seems to reasonably point out is that when you focus you may
unconsciously limit yourself.  To attempt to achieve Turing-indistinguishability
with absolutely no definition in mind seems impossible (not to mention
doomed to failure) to me.  I would argue anyone with such a goal has an
operative definition of intelligence -- consciously held or otherwise.
(And if it isn't conscious, boy is that person in trouble!)

The gravity analogy seems weak because we all seem to agree on what gravity
does, so we know what we have a theory about.  On the other hand we do
not seem to agree so much wrt what intelligence does -- so how do we
have a theory?  Our "definition(s)" are at least an attempt to specify
what we're talking about.

J.O.

-- 
Joseph D. Oldham
oldham@ms.uky.edu
oldham@UKMA.BITNET
home: 233 7614
