From newshub.ccs.yorku.ca!torn!utcsri!rutgers!gatech!destroyer!uunet!sequent!muncher.sequent.com!bfish Tue Jul 28 09:41:31 EDT 1992
Article 6464 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!torn!utcsri!rutgers!gatech!destroyer!uunet!sequent!muncher.sequent.com!bfish
From: bfish@sequent.com (Brett Fishburne)
Newsgroups: comp.ai.philosophy
Subject: Re: Defining other intelligence out of existence
Message-ID: <1992Jul16.153824.23887@sequent.com>
Date: 16 Jul 92 15:38:24 GMT
References: <BrDw9t.8L1@brunel.ac.uk>
Sender: bfish@sequent.com
Followup-To: comp.ai.philosophy
Organization: Sequent Computer Systems Inc.
Lines: 40
Nntp-Posting-Host: sequent.sequent.com

In article <BrDw9t.8L1@brunel.ac.uk> Christopher.Carne@brunel.ac.uk (Christopher J Carne) writes:
>To me what is valuable about the Turing Test is that it does not
>ask (or give) absolute or essentialist definitions. Rather it is
>sensitive to the dynamics of the social and relational ways in which
>humans construct concepts. In effect the Turing Test gives a sociological
>view of intelligence rather than an (essentialist) philosophical or
>scientific one. Moreover, it is a test that is reflexively sensitive
>to the roles that the increasing capabilities of machines have in
>our attitudes towards intelligence. 
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

I couldn't agree with you more.  The Turing Test gives those so inclined the
opportunity to disregard serious advances in artificial intelligence.  The
overwhelming sociological view of intelligence is that it is unacceptable in
any form which is not *human*.  This is my complaint with the Turing Test.

>Definitions of intelligence may be useful in pursuing our own
>disciplinary aims, but any definition will only make sense
>within our discursive practices and will remain socially
>constructed. Indeed I feel that the lack of any definition
>we can all agree on is a healthy feature, reflecting the wide 
>diversity of viewpoints and methodologies in AI. Calls for
>absolute definitions seem to reflect a lack of philosophical
>enterprise and a privileging of cognitivism as the only available
>grounding for work in AI, leading to a narrowing of vision and 
>delimiting the type of work that can be done in AI. 

This lack of a definition also leads to a serious ambiguity in what can
reasonably be called AI.  While I understand and sympathize with the concern
that definitions of AI could unreasonably limit the scope of work which
is considered AI, I still think it is necessary to establish _some_
limits on what AI is.  In the end, no one should object to limiting the
scope of what is *considered* AI as long as such a limit does not
preclude AI.

-- Brett

bfish@sequent.com

Standard Disclaimer
