Newsgroups: comp.ai
Path: cantaloupe.srv.cs.cmu.edu!rochester!cornell!travelers.mail.cornell.edu!news.kei.com!news.mathworks.com!news.alpha.net!uwm.edu!news.moneng.mei.com!howland.reston.ans.net!Germany.EU.net!netmbx.de!zrz.TU-Berlin.DE!cs.tu-berlin.de!fu-berlin.de!news.dfn.de!RRZ.Uni-Koeln.DE!news.rhrz.uni-bonn.de!news.uni-stuttgart.de!rz.uni-karlsruhe.de!stepsun.uni-kl.de!uklirb.informatik.uni-kl.de!schmid
From: schmid@informatik.uni-kl.de (Klaus Schmid)
Subject: Re: AI can't incorporate adaption
Message-ID: <1995Feb22.173321@informatik.uni-kl.de>
Sender: news@uklirb.informatik.uni-kl.de (Unix-News-System)
Nntp-Posting-Host: gentzen.informatik.uni-kl.de
Organization: University of Kaiserslautern
References: <1995Feb16.194733@informatik.uni-kl.de> <DBP.95Feb18082702@proof.csli.stanford.edu>
Date: Wed, 22 Feb 1995 16:33:21 GMT
Lines: 60

In article <DBP.95Feb18082702@proof.csli.stanford.edu>, dbp@csli.stanford.edu (David Barker-Plummer) writes:
|> In article <1995Feb16.194733@informatik.uni-kl.de> schmid@informatik.uni-kl.de (Klaus Schmid) writes:
|> 
|> > One reason to dismiss classical (say computational/symbolic/etc.) 
|> > AI models of cognition is the fact that they cannot incorporate
|> > adaptive processes as evolution, development, learning
|> > (all this is possible with biologically motivated models
|> > like neural networks).
|> 
|> KS> First a few questions regarding the words you use. What do you
|> KS> mean by a computational model -- In what sense are neural networks
|> KS> (especially artificial NN) NOT computational??  What do you mean
|> KS> with learning in this context? What can an artificial neural net
|> KS> do, that can't be done by a symbolic machine learning system?
|> 
|> KS> Indeed in the comparisons I know of, even (simple) machine
|> KS> learning algorithms like ID3 did clearly outperform Neural Nets
|> KS> (e.g. back- propagation nets) on the same task.
|> 
|> Could you post a citation in support of this claim, please?  
|> 
For what is perhaps the most striking piece of evidence I do not have a citation:
  In an informal meeting here at the University of Kaiserslautern some people
  reported on a project at their university (I think it was Dortmund).
  They implemented about a dozen machine learning algorithms and ran them under
  identical conditions on the same task (predicting the direction in which stock
  market prices move).
On this task they used a back-propagation net, ID3, ID3 with pruning, etc. (a total
of about a dozen algorithms - I do not recall them all at the moment). Among these,
back-propagation was among the (5? - I am recalling this from memory) worst
algorithms as rated by prediction quality.
Along a different dimension, however, it was leading: it needed about 100 times as
much CPU time for learning as the next most expensive algorithm.
(And this although neural networks are usually regarded as good candidates for
this kind of task.)
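(As an aside, for readers unfamiliar with ID3: it builds a decision tree by
repeatedly choosing the attribute with the highest information gain. A minimal
sketch of that selection criterion - the toy "stock" data and attribute names
are purely illustrative, not from the Dortmund project:)

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(examples, attr):
    """Expected entropy reduction from splitting on `attr`.
    Each example is a pair (attribute-dict, class-label)."""
    labels = [label for _, label in examples]
    n = len(examples)
    remainder = 0.0
    for v in set(a[attr] for a, _ in examples):
        subset = [label for a, label in examples if a[attr] == v]
        remainder += len(subset) / n * entropy(subset)
    return entropy(labels) - remainder

# Illustrative toy data set: predict whether the price goes 'up'.
data = [
    ({"trend": "up",   "volume": "high"}, "up"),
    ({"trend": "up",   "volume": "low"},  "up"),
    ({"trend": "down", "volume": "high"}, "down"),
    ({"trend": "down", "volume": "low"},  "down"),
]

# 'trend' perfectly predicts the class here, so its gain is 1 bit;
# 'volume' is uninformative, so its gain is 0 bits.
print(information_gain(data, "trend"))   # -> 1.0
print(information_gain(data, "volume"))  # -> 0.0
```

ID3 would pick "trend" as the root test here; the pruned variant mentioned
above additionally cuts back branches that do not pay off on unseen data.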

BUT, if you want a citation, here we go: Have a look at 

Gregory Piatetsky-Shapiro and William Frawley (eds.)
Knowledge Discovery in Databases
AAAI Press/The MIT Press, 1991

This is a collection of articles. All the authors use "classical" machine learning
algorithms (depending, of course, on what you define to be a "classical" algorithm).
Some of them also tried connectionist approaches on the same task, but they
do not report any improvement.

At the moment I do not have the book at hand, and at the time I went through
the articles I was not that interested in NNs, so I do not have the exact
references (i.e., which articles included NNs in the comparison).


Bye 
	Klaus

