From newshub.ccs.yorku.ca!torn!utcsri!rpi!usc!wupost!uunet!ogicse!pnl-oracle!duke!d3g637 Thu Jul  9 16:20:31 EDT 1992
Article 6422 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!torn!utcsri!rpi!usc!wupost!uunet!ogicse!pnl-oracle!duke!d3g637
From: d3g637@duke.oname (DP Chassin)
Newsgroups: comp.ai.philosophy
Subject: Re: Defining other intelligence out of existence
Message-ID: <1992Jul7.002937.27952@oracle.pnl.gov>
Date: 7 Jul 92 00:29:37 GMT
Article-I.D.: oracle.1992Jul7.002937.27952
References: <1992Jul1.044930.8970@news.media.mit.edu>
Sender: news@oracle.pnl.gov
Reply-To: d3g637@duke.oname
Organization: Sun Microsystems
Lines: 69

In article 28317@sequent.com, bfish@sequent.com (Brett Fishburne) writes:
>
>The Turing Test is an excellent case in point.  The computer is not
>considered to be intelligent until it is virtually indistinguishable from
>a human.  It seems to me, if you are interested in producing a human, this
>is a valid test.  If, however, you are interested in producing
>*intelligence*, this might be considered overkill.  
>
>Is it fair to require that for something to be considered intelligent it 
>must mimic the _most_ intelligent thing we can think of?  Suppose we applied 
>that standard to running.  You can only be a runner if you can run as fast
>as a cheetah, oh, and, by the way, you must run on all fours.  I know this
>is a ludicrous example, but is it really much worse than what we are asking
>of artificial intelligence?
>
>Equally interesting, why set this standard?  Could it possibly be that
>humans cannot deal with the possibility that we are not unique in the
>universe?  Sounds like a certain stance attributed to most religions, not
>philosophical paradigms...
>

I agree with your reaction to the Turing Test and your sense of what the
problem is.  I've given the subject much thought and have come to the
conclusion that Turing's test is not so much a measure of intelligence as a
measure of human communicative ability.  However, setting aside Turing's
(non-trivial) contribution does not change the problem.  I still don't have
a satisfactory answer, though I want one very much.

In article 8970@news.media.mit.edu, nlc@media.mit.edu (Nick Cassimatis) writes:
>In article <1992Jun30.193051.28317@sequent.com> bfish@sequent.com (Brett Fishburne) writes:
>>I have followed all kinds of discussions lately both here and on other
>>news groups which talk about methods of evaluating artificial
>>(or just plain non-human) intelligence.  What I have taken away from these
>>discussions is a clear impression that the philosophical community seems
>>to be at a loss to define/evaluate intelligence independent of being
>>human.  This may seem trivial (or obvious), but, IMHO, it is an important
>>observation which deserves some review.
>
>It's nontrivial, but frighteningly obvious.  Much of the talk here
>would vanish into incoherence or tautology as soon as precise
>definitions were introduced.  This is more than a waste of time -- it
>is dangerous, for it stifles one's thought and makes one needlessly
>pessimistic.  At this stage in AI's development, I think that spending
>time on definitions is really not worth a great deal of effort.  We
>should be getting things to do things like speak, plan, etc., whether
>we call them smart or not. 
>

Although your concern is to some degree justified, I can't help but feel
that without some effort at defining intelligence I might not be capable of
recognizing more exotic/interesting examples, whether they be natural,
artificial, or even (dare I?) extra-terrestrial.  I have trouble separating
the outward evidence of intelligence, such as speaking or planning, from
the inward activity itself.  This, I think, is the essence of the problem
with the Turing Test.

I would like to pursue discussions on better defining intelligence. 


	David P. Chassin
	Research Scientist
	Building Systems Performance Group
	Battelle
	Pacific Northwest Laboratories
		MS K5-16
		2400 Stevens Drive
		Richland, WA  99352
	(509)375-4369
	dp_chassin@pnl.gov
