Message-ID: <32B03B1B.5703@parkcity.com>
Date: Thu, 12 Dec 1996 10:04:27 -0700
From: Richard Keene <rkeene@parkcity.com>
Organization: Park City Group
X-Mailer: Mozilla 2.0 (WinNT; U)
MIME-Version: 1.0
CC: rkeene@parkcity.com
Subject: Re: Why AI will eventually work
References: <329C4585.74A8E2E0@geocities.com>
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Newsgroups: comp.ai,comp.ai.philosophy,comp.ai.neural-nets
Lines: 79
Path: cantaloupe.srv.cs.cmu.edu!rochester!cornellcs!newsstand.cit.cornell.edu!news.acsu.buffalo.edu!dsinc!spool.mu.edu!newspump.sol.net!mindspring!news.bbnplanet.com!cpk-news-hub1.bbnplanet.com!feed1.news.erols.com!worldnet.att.net!newsxfer2.itd.umich.edu!uunet!in2.uu.net!207.67.1.2!news.inc.net!novia!nntp2.rmci.net
Xref: glinda.oz.cs.cmu.edu comp.ai:42751 comp.ai.philosophy:49885 comp.ai.neural-nets:35151

My two bits,

For one entity to understand another entity, they must have
a common ground of direct experience.  For example, people know
what a chair is because all people can sit.  They have done it,
and it is a direct experience.  A computer program can analyze
various aspects of chairs, yet it doesn't 'know' what a chair is.
If we (the AI community) are to make true intelligences (whatever
intelligence means) then we must start with systems that partially
replicate some aspect of human experience, so we have a common
ground to start with.  If I make an AI that is totally character
IO based (input is keyboard, output is text) then I can never
explain to it what a chair is.  I can only discuss with it
concepts that relate to text.

What AI needs to get moving again is the following:

1.) Attack the same problems that evolution attacked to arrive at
intelligent systems.  That problem is how to survive in natural
environments.

2.) Stop being so philosophical.  Let's do engineering, not
philosophy.  The current tidal wave of discussions about
consciousness and such simply highlights the lack of any real
progress.  It is much like the religious debate over 'how many
angels can dance on the head of a pin.'  I think it is not so
important to define intelligence in a philosophical sense in
order to arrive at AI.  People recognize intelligence in
creatures, yet can't define it.

3.) Attempt to arrive at algorithms and design methods that are
growable in a biological sense.  Biology tends to take very
simple ideas and use massive parallel redundancy to achieve an
effect.  Current computer programs are the exact opposite: they
use very detailed logic and threads of control.  Any algorithm
that is supposed to produce intelligence must be able to work
with a random 50% of the system elements gone.  No program will
work with a random half of its source code lines missing.  The
design methodology must also be incrementally evolvable.  This is
what evolution has done, and it must be a feature so the
engineering design can be done a bit at a time.
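As a toy illustration of the 50%-knockout idea (a sketch of my
own, not the algorithm mentioned later in this post), consider a
system whose answer is the consensus of many simple, redundant
units.  Remove a random half of them and the output barely moves:

```python
import random

# Toy sketch: a "population" of many simple redundant units, each a
# noisy copy of the same estimate.  The system's answer is the
# consensus (average) of whatever units survive -- unlike a single
# thread of control, where losing any one line breaks everything.
# All names and numbers here are made up for illustration.

def make_units(n, true_value=1.0, noise=0.2):
    """Each unit is a cheap, noisy copy of the same simple estimate."""
    return [true_value + random.uniform(-noise, noise) for _ in range(n)]

def system_output(units):
    """The system's answer is the consensus over surviving units."""
    return sum(units) / len(units)

random.seed(0)
units = make_units(1000)
full = system_output(units)

# Knock out a random 50% of the system elements.
survivors = random.sample(units, len(units) // 2)
degraded = system_output(survivors)

print(full, degraded)  # the two answers are nearly identical
```

The redundancy, not any clever logic in a single unit, is what
buys the graceful degradation.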

There is a problem with the above criteria: they result in AI
constructs that act like real animals.  Such behavior is not
easily applicable to real-world problems (i.e. making money)
until one reaches human levels of intelligence.  Making an
intelligence equal to a lizard may not be all that useful.
About the only real applications would be self-guided vehicles
such as rug cleaners, military killing machines, and exploration
machines.

One might use AI to predict the stock market by creating an
intelligence that 'lives' in a financial world.  Classical
problems such as medical diagnosis would be very difficult for
an AI until one gets to the human level of cognition.

I have an algorithm that satisfies the above criteria.  If you're
interested in the paper, just ask.

-- 
Richard Keene
Park City Group
Box 5000
Park City, Utah, USA
84060

phone: 801-645-2875
rkeene@parkcity.com

The opinions expressed herein do not represent the opinions of
Park City Group, etc.
