From newshub.ccs.yorku.ca!torn!cs.utexas.edu!sdd.hp.com!usc!rpi!uwm.edu!ogicse!pdxgate!dehn!erich Mon Aug 24 15:41:11 EDT 1992
Article 6646 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!sdd.hp.com!usc!rpi!uwm.edu!ogicse!pdxgate!dehn!erich
>From: erich@dehn.mth.pdx.edu (Erich Boleyn)
Newsgroups: comp.ai.philosophy
Subject: Re: Turing Test Myths
Message-ID: <erich.714208724@dehn>
Date: 19 Aug 92 07:18:44 GMT
Article-I.D.: dehn.erich.714208724
References: <1992Aug13.024527.2079@news.media.mit.edu> <BILL.92Aug13130725@ca3.nsma.arizona.edu> <1992Aug13.230220.23021@news.media.mit.edu> <1992Aug14.045834.23492@mp.cs.niu.edu>
Sender: news@pdxgate.UUCP
Lines: 77

rickert@mp.cs.niu.edu (Neil Rickert) writes:

>In article <1992Aug13.230220.23021@news.media.mit.edu>
>	minsky@media.mit.edu (Marvin Minsky) writes:
>>
>>Well, yes.  It seems to me that we use the word "intelligence" in
>>regard to mental performances that we admire.

>  As long as we view intelligence this way, our investigation of
>intelligence will be about as scientific as if we used ouija boards or
>tarot cards.  Physics started to make strides only after Galileo questioned
>common assumptions.  Astronomy took off after Copernicus questioned the
>obvious beliefs.  Likewise chemistry, biology, and probably many other
>fields.

   Assumptions are sometimes best questioned by reformulating the
problem.  For example, in trying to analyze moral behavior, one of
the most successful approaches (in terms of explanatory power) is to
totally ignore the concepts of "right" and "wrong", etc. and just use
survival-based functional characterizations.  It also simplifies things
tremendously.  Interestingly enough, it doesn't shed much light back on
the "right" or "wrong" parts, at least not as originally formulated.

   The main hallmark of "scientific progress" is usually either careful
definition of terms or the adoption of a completely new vocabulary that
does not correspond with the old "philosophical" terms that people used
to describe the phenomena before.

   The biggest problem (IMHO) is that we are using terms that were never
*meant* for anything but vague comparisons between people...  for example
(as Minsky suggested), "s/he is very intelligent", "s/he is not so smart",
etc. etc.  A kind of linearization is imposed on a phenomenon that most of
us admit is so complex (and likely not linearly scalable in any reasonable
way) as to almost defy categorization in the first place.  Then on top
of it all we are trying to fit these words onto the complex set of
concepts that each of us holds, and (from the looks of the literature on
the subject) each person has a different partial fit.

   The mental baggage of thousands of years of evolved language used to
deal nearly exclusively with human social interactions is somewhat
hindering.  Sure, some intellectuals (including Freud) have added to
that, but how much has been added since then?  The very words
"consciousness", "intelligence", (and others) are each over 100 years
old, and using them flies in the face of the fact that we know better.

   In a way, I tend to think that the first philosophical questions were
opened up years ago (call it the birth of AI); then, when many of those
questions were deemed not well answerable, there was a gradual shift
to a more "methodic" approach to studying systems (systems science),
bringing in information from neuroscience and cognitive functional
analysis.  The newest generation of shifts is (again, IMHO) likely the
"Artificial Life" movement.  Each of the major observable stages has
moved farther away from using common terms and the questions they
represent, until we get to "Artificial Life", the purpose of which is
no longer to get "intelligence".  Instead, the concept is left by the
wayside as more and more "useful" behaviors are adopted.

   One could argue that "AL" produces robots and programs that are no
smarter than ants or beetles, and that this is not what the "AI"ers
are interested in.  Still, it is one of the first serious steps...  and
clear definitions can and do exist with some rigor in the field.

   Don't get me wrong...  I'm a die-hard AI'er myself...  and would love
to work on real "intelligent" systems.  I do think that some very careful
"ontological groundwork", at the very least, is long overdue.  But then
I'm beating a dead horse on that one.  Or maybe there will be another
shift beyond "AL" ;-) ?  Hmm...  the study of collectives sure looks
interesting...

   Erich

--
             "I haven't lost my mind; I know exactly where it is."
    / --  Erich Stefan Boleyn  -- \       --=> *Mad Genius wanna-be* <=--
   { Honorary Grad. Student (Math) } Internet E-mail: <erich@dehn.mth.pdx.edu>
    \  Portland State University  /      WARNING: INTERESTED AND EXCITABLE
