From newshub.ccs.yorku.ca!torn!cs.utexas.edu!uunet!caen!kuhub.cc.ukans.edu!spssig.spss.com!markrose Mon Aug 24 15:40:46 EDT 1992
Article 6613 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!uunet!caen!kuhub.cc.ukans.edu!spssig.spss.com!markrose
Newsgroups: comp.ai.philosophy
Subject: Re: Turing Test Myths
Message-ID: <1992Aug13.200027.20543@spss.com>
From: markrose@spss.com (Mark Rosenfelder)
Date: Thu, 13 Aug 1992 20:00:27 GMT
Sender: news@spss.com (Net News Admin)
References: <BILL.92Aug12122254@ca3.nsma.arizona.edu> <1992Aug13.024527.2079@news.media.mit.edu> <BILL.92Aug13130725@ca3.nsma.arizona.edu>
Organization: SPSS Inc.
Lines: 16

In article <BILL.92Aug13130725@ca3.nsma.arizona.edu> bill@nsma.arizona.edu (Bill Skaggs) writes:
>Anybody working in Artificial Intelligence must have at least an
>implicit notion of what intelligence is.  Otherwise I could build a
>screwdriver, and say look, I've achieved artificial intelligence! --
>and there would be no grounds for complaint.
>
>Making these implicit notions explicit is a large part of what
>philosophy is about.  It's a dangerous job -- a bad definition can
>lead into nasty tangles of paradox -- but the alternative -- working
>entirely with unanalyzed notions -- is even worse. 

AI researchers are hardly leaving their notions of intelligence unanalyzed.
Arguably, their views of what intelligence is are manifested in their
programs.  Perhaps that's why Prof. Minsky shows such disdain for
definitions: they're shallow and unsatisfying things compared to the
mass of analysis that precedes and produces an AI program.
