From newshub.ccs.yorku.ca!ists!torn.onet.on.ca!utgpu!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!think.com!mintaka.lcs.mit.edu!micro-heart-of-gold.mit.edu!news.media.mit.edu!nlc Mon Jun 15 16:05:03 EDT 1992
Article 6246 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!torn.onet.on.ca!utgpu!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!think.com!mintaka.lcs.mit.edu!micro-heart-of-gold.mit.edu!news.media.mit.edu!nlc
From: nlc@media.mit.edu (Nick Cassimatis)
Subject: Re: Defining "intelligence"
Message-ID: <1992Jun13.202428.12450@news.media.mit.edu>
Sender: news@news.media.mit.edu (USENET News System)
Organization: MIT Media Laboratory
References: <1992Jun10.131608.23965@cs.ucf.edu> <1992Jun13.063902.2610@news.media.mit.edu> <BILL.92Jun13124510@ca3.nsma.arizona.edu>
Distribution: world,local
Date: Sat, 13 Jun 1992 20:24:28 GMT
Lines: 43

In article <BILL.92Jun13124510@ca3.nsma.arizona.edu> bill@nsma.arizona.edu (Bill Skaggs) writes:
>
>When we are thinking about something like "intelligence", we cannot
>begin with a prescriptive definition, because we are trying to
>understand the meaning of the word as it is generally used by people.
>To prescribe a new meaning is simply to duck the problem.  This is
>what Jeff Dalton meant when he said "I won't play the definition
>game".  It is quite reasonable, though --- and even necessary -- to
>look for a descriptive definition in a situation like this; otherwise
>we can never be sure we're all talking about the same thing.
>
>To sum up, I believe the argument here is between people who are
>thinking about two different kinds of definition.
>
>	-- Bill

What do you [plural] consider a satisfactory "descriptive" definition?
Searle (to my knowledge) hasn't given one for consciousness or
intelligence, Dennett refuses to give one with an "if and only if" in
it, and the discussion here hasn't produced one either.  Whether we
can clearly explicate our notions of intelligence and consciousness
as they now stand is very much in question.  I don't think those
notions, as they stand, are coherent.

So if you [plural, as always] want to establish some "limits" on AI or
"Artificial Consciousness" or Artificial Life for that matter, give us
a definition and call it whatever kind of definition you want.  Until
then, researchers in these fields should just get to work making
machines that can speak and plan and so forth.

The Chinese room doesn't even attempt to show that machine translation
is impossible (in fact, it assumes that it is possible).  It only
tries to show that "understanding" is impossible.  There would have
been arguments against robotics in the early 19th century saying that
even if you can make machines that look like humans, that talk like
them, that behave like them, and so forth, you can't make a machine
that has "life".  Vitalism such as that is out of fashion now.  But it
is not really that different from the "intellectualism" of Searle and
all them.  If you use some wishy-washy notion such as "life" or
"intelligence", and then show that artificial life and/or intelligence
are impossible, then you haven't really done anything but proliferate
confusion and bastardize the language.
