Message-ID: <32AC7C0A.20ED@parkcity.com>
Date: Mon, 09 Dec 1996 13:52:26 -0700
From: Richard Keene <rkeene@parkcity.com>
Organization: Park City Group
X-Mailer: Mozilla 2.0 (WinNT; U)
MIME-Version: 1.0
CC: rkeene@parkcity.com
Subject: Re: AI: Simulate or Duplicate Intelligence?
References: <3298E991.F3E@ix.netcom.com>
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Newsgroups: comp.ai
Lines: 166
Path: cantaloupe.srv.cs.cmu.edu!bb3.andrew.cmu.edu!newsfeed.pitt.edu!scramble.lm.com!news.math.psu.edu!news3.cac.psu.edu!howland.erols.net!news-peer.gsl.net!news.gsl.net!uwm.edu!newsspool.doit.wisc.edu!news.doit.wisc.edu!news.itis.com!news.inc.net!novia!nntp2.rmci.net

John Frenster wrote:
> 
> In a 1988 publication, Hilary Putnam declared: "The notional task of
> artificial intelligence is to simulate intelligence, not to duplicate
> it."
>    Is this still true ?  Was it ever true ?
>       John
> --
> John Frenster                 Voice: 415/367-6483
> matrixcognition(TM)           FAX:   415/364-1773
> 247 Stockbridge Avenue        e-mail: matcog@ix.netcom.com
> Atherton, CA 94027-5446       CompuServe: 71077,2534
>           http://matrixcognition.com/INDEX.HTM
>    matrixcognition:  "Computer-Assisted Decision-Making"

One of the major problems in AI is that its goals are not well 
defined.  Of course, the goals of many sciences are not well 
defined either, yet those sciences are still useful.

In my view, the goal of AI is to simulate intelligence in order 
to 1.) understand what intelligence is, and 2.) duplicate 
intelligence so it can be used as a tool.

Number 1 is a philosophical goal, and number 2 is an engineering 
problem.

The current trends in AI seem to be very philosophical.  Since 
the engineering aspects have failed so spectacularly in the last 
few years, investors now shy away from the field.  (The 
engineering aspects have succeeded greatly in creating useful 
new algorithms that are actually making money for investors, yet 
have failed miserably at making anything you or I would call 
intelligent.)
The engineering side of AI has a few basic algorithms that are 
now studied:
1.) Neural nets, which correlate input patterns to output 
patterns.
2.) Rule-based systems and expert systems.  These apply lots of 
if-then-else rules.  This includes fuzzy logic.
3.) Image and text feature extraction.
4.) Symbolic processing: assign symbols and the relations 
between them.
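To make family 2 concrete, here is a minimal sketch of a 
rule-based system: a working memory of facts plus forward 
chaining over if-then rules.  The rules and facts are invented 
for illustration only.

```python
def run_rules(facts, rules):
    """Forward chaining: apply rules until no new facts are derived.
    Each rule is (condition_facts, conclusion_fact)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for condition, conclusion in rules:
            # fire the rule if all its conditions hold and it adds something new
            if condition <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# invented toy rules for illustration
rules = [
    ({"has_fur", "gives_milk"}, "mammal"),
    ({"mammal", "eats_meat"}, "carnivore"),
]
print(run_rules({"has_fur", "gives_milk", "eats_meat"}, rules))
```

Chaining like this is exactly the "lots of if-then-else rules" 
behavior described above: mechanical, useful, but not what 
anyone would call intelligent.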

All of these do have features that we associate with 
intelligence, yet none of them are intelligent.

I believe the problem with AI is that the wrong questions are 
being asked.  Intelligence evolved in order to promote the 
survival of the organism.  If it were not a pro-survival 
feature, it would not exist.  The way intelligence is a 
pro-survival feature is that it predicts the future state of 
the environment and allows the organism to react to that 
future.  In simple brains the prediction might reach a second 
or two into the future.  In complex brains it might look hours 
ahead, and in the case of humans, centuries.  (For short-term 
predictions it is just the future state we are predicting.  If 
the prediction is long-term and iterative, we call it planning.  
The difference is only in depth of prediction, not in the 
fundamental process.)
For the future prediction to be useful, there must be some 
mechanism for reacting to the predicted future state.
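The claim that planning is just deep iterated prediction can be 
sketched in a few lines.  The toy one-step "model" of the 
environment here is an assumption purely for illustration; any 
learned predictor would play the same role.

```python
def one_step_predict(state):
    """Toy stand-in for a learned model of the environment's dynamics:
    predicts the next state from the current one."""
    return state + 1

def plan(state, depth):
    """Iterate the same one-step prediction 'depth' steps ahead.
    depth=1 is short-term prediction; large depth is what we call
    planning -- the process is identical, only the depth differs."""
    for _ in range(depth):
        state = one_step_predict(state)
    return state

print(plan(0, 1))    # a second-or-two prediction
print(plan(0, 100))  # "planning": same loop, greater depth
```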

An architecture consisting of a subsumptive neural system that 
represents mappings of the environment, plus a pattern 
recognizer, should duplicate the cognitive process.
At its lowest level, the subsumptive system holds immediate 
representations of the immediate environment, such as cold/hot, 
color, touch mappings, sound tone and timbre, and such.  These 
concrete mappings are then abstracted into higher-level 
mappings of the environment; the upper levels of the 
subsumptive system are more abstract than the lower levels.
The pattern recognizer then follows this algorithm:
1.) Match the current state of the entire subsumptive system 
against all previous states.
2.) Use the best match to re-stimulate the subsumptive system 
toward that previous state.  The degree of re-stimulation is 
proportional to the strength of the pattern match.
3.) Go back to step one and iterate.
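The three-step loop above can be sketched as follows.  The 
choices here are assumptions for illustration: the subsumptive 
state is flattened to a plain vector, "match" is cosine 
similarity, and re-stimulation blends the best-matching memory 
back into the state in proportion to the match strength.

```python
import math

def similarity(a, b):
    """Cosine similarity between two state vectors (0 when either is zero)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def recognizer_step(state, memories):
    # 1.) Match the current state against all previous states.
    best = max(memories, key=lambda m: similarity(state, m))
    strength = similarity(state, best)
    # 2.) Re-stimulate toward the best match, proportional to strength.
    new_state = [s + strength * (m - s) for s, m in zip(state, best)]
    return new_state, strength

# invented previous states and current state, for illustration
memories = [[1.0, 0.0, 0.0], [0.0, 1.0, 1.0]]
state = [0.9, 0.1, 0.0]
for _ in range(5):  # 3.) iterate
    state, strength = recognizer_step(state, memories)
print(state, strength)
```

After a few iterations the state settles near the closest 
stored experience, which is the cascade toward a balance 
described below.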

This algorithm will cascade the brain into a balance between 
the strength of the match to previous experience and the 
current environmental state.  This will make the organism react 
to the now-predicted future environmental state.

There will be a fuzzy boundary somewhere in the subsumptive 
system between the upper abstract levels, which react to the 
imaged or re-stimulated state, and the low-level subsumptive 
mappings, which represent the actual immediate environment.

In a calm environment almost the whole subsumptive system can 
be in a re-stimulated, imaginary state.  A sudden strong input 
will cause the pattern matcher to be overridden, and the actual 
environment will take over the subsumptive system.

There also needs to be a control mechanism that inhibits the 
pattern recognizer when a bad future is predicted, so the 
system can backtrack and try predicting a better future.  This 
is where emotions become important.  If the iterative 
prediction results in a bad outcome, the negative emotions 
associated with that outcome inhibit the pattern recognizer.  
This brings the subsumptive system back to the current 
environment so a new predictive sequence can begin.  If the 
outcome is only slightly bad, the pattern recognizer is only 
partially reset, and so a small backtrack is done.  Thus the 
system can choose the best future and act accordingly.
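A rough sketch of that inhibition mechanism: roll candidate 
futures forward, score each predicted outcome with a valence 
(the "emotion"), and when the valence is bad, inhibit and try 
the next candidate.  This simplifies the partial-reset idea to 
stepping through candidates in order; the candidate futures and 
the valence function are invented for illustration.

```python
def choose_future(candidates, valence, threshold=0.0):
    """Return the first predicted future whose emotional valence is
    acceptable; if every future is bad, fall back to the least-bad one."""
    best, best_v = None, float("-inf")
    for future in candidates:
        v = valence(future)
        if v >= threshold:
            return future           # good enough: act on this future
        if v > best_v:              # bad: inhibit, backtrack, try the next
            best, best_v = future, v
    return best                     # all bad: choose the least-bad future

# toy valence: prefer futures that end far from a hazard at position 0
valence = lambda future: abs(future[-1]) - 2
futures = [[0, 1, 0], [0, 1, 2], [0, 1, 2, 3]]
print(choose_future(futures, valence))
```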

Such an algorithm is useful for making war machines, predicting 
the stock market, and controlling vehicles.  It is very 
difficult to apply to current AI engineering goals.  In fact, 
the only AI goals I have ever seen that match what natural 
intelligence is actually used for are the Mars rover projects 
and military self-guided vehicles.  All the other AI 
engineering goals are trying to build the castle before putting 
a foundation under it (e.g. rule-based systems).

Projects to build self-guided systems will only succeed to a 
certain extent, because they lack the iterative process.  That 
is why the Mars rover still needs to be remotely piloted.

Actually proving this theory in depth will require a rather 
large computer and a few years of research.  The results could 
make investors a boatload of money.

I currently have a PC (Win95) program that implements the above 
algorithm, but it lacks the speed to really do a deep proof.  I 
have gotten a first proof of concept by predicting the future 
state of a set of whiskers to avoid obstacles.  It works.

	R. Keene


-- 
Richard Keene - Senior Systems Engineer
Box 5000
Park City, UT, 84060
USA

The ideas expressed herein are the ideas of
the author and do not necessarily represent
the ideas or policies of Park City Group.

rkeene@parkcity.com

Work Phone: 801-645-2875

The Park City Group
"Flawless Operational Consistency at 
 Radically Reduced Cost"
