From newshub.ccs.yorku.ca!torn!utcsri!rpi!usc!cs.utexas.edu!rutgers!micro-heart-of-gold.mit.edu!news.media.mit.edu!nlc Thu Jul  9 16:20:18 EDT 1992
Article 6402 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!torn!utcsri!rpi!usc!cs.utexas.edu!rutgers!micro-heart-of-gold.mit.edu!news.media.mit.edu!nlc
From: nlc@media.mit.edu (Nick Cassimatis)
Newsgroups: comp.ai.philosophy
Subject: Re: Defining other intelligence out of existence
Message-ID: <1992Jul1.044930.8970@news.media.mit.edu>
Date: 1 Jul 92 04:49:30 GMT
References: <1992Jun30.193051.28317@sequent.com>
Sender: news@news.media.mit.edu (USENET News System)
Organization: MIT Media Laboratory
Lines: 30

In article <1992Jun30.193051.28317@sequent.com> bfish@sequent.com (Brett Fishburne) writes:
>I have followed all kinds of discussions lately both here and on other
>news groups which talk about methods of evaluating artificial
>(or just plain non-human) intelligence.  What I have taken away from these
>discussions is a clear impression that the philosophical community seems
>to be at a loss to define/evaluate intelligence independent of being
>human.  This may seem trivial (or obvious), but, IMHO, it is an important
>observation which deserves some review.

It's nontrivial, but frighteningly obvious.  Much of the talk here
would vanish into incoherence or tautology as soon as precise
definitions were introduced.  This is more than a waste of time -- it
is dangerous, for it stifles one's thought and makes one needlessly
pessimistic.  At this stage in AI's development, I think that spending
time on definitions is really not worth a great deal of effort.  We
should be getting machines to do things like speak, plan, etc.,
whether we call them smart or not.

>The Turing Test is an excellent case in point.  The computer is not
>considered to be intelligent until it is virtually indistinguishable from a human.
>It seems to me, if you are interested in producing a human, this is a valid
>test.  If, however, you are interested in producing *intelligence*, this
>might be considered overkill.

In a recent lecture, Chomsky pointed out that we don't judge attempts
at artificial flight by trying to convince people that a jet is an
eagle.  While I think it would be wonderful to achieve artificial
human intelligence (we would get a lot of insights into ourselves),
I don't see why we should limit ourselves to this.

-Nick
