Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!news.mathworks.com!uunet!usc!howland.reston.ans.net!swrinde!ihnp4.ucsd.edu!munnari.oz.au!goanna.cs.rmit.edu.au!aggedor.rmit.EDU.AU!harbinger.cc.monash.edu.au!merlin!mel.dit.csiro.au!its.csiro.au!dmssyd.syd.dms.CSIRO.AU!metro!metro.ucc.su.OZ.AU!tomli_p
From: tomli_p@stanier.archsci.arch.su.edu.au (philip tomlinson)
Subject: Data vs Knowledge
Message-ID: <TOMLI_P.95Mar20204547@stanier.archsci.arch.su.edu.au>
Sender: news@ucc.su.OZ.AU
Nntp-Posting-Host: stanier.arch.su.edu.au
Reply-To: tomlinso@ozemail.com.au
Organization: Department of Architectural and Design Science, University of
	Sydney
Date: Mon, 20 Mar 1995 10:45:47 GMT
Lines: 33

Hello,

I've just been reading Volume 1 of Parallel Distributed Processing,
which has gotten me thinking about learning methods for subsymbolic
systems. I've had a bit of experience with Reinforcement Learning, Neural
Networks and Genetic Algorithms.

I generally think these methods are useful when you have heaps of data to
learn from and not a lot of knowledge to start with.  This type of learning
seems less "intelligent" than learning with a lot of knowledge and only a
little bit of data. Modifying an expert system's knowledge base seems
to be more the latter: one has lots of heuristic knowledge and only
a bit of data (one missed classification or something).
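To make the contrast concrete, here's a rough toy sketch in Python (the
data, rules, and function names are all made up for illustration, not
taken from any particular system): a perceptron that needs hundreds of
examples to find a boundary it starts out knowing nothing about, versus
a rule base that gets patched with a single exception after one missed
classification.

    import random

    # Data-heavy, knowledge-light: a perceptron needs many labelled
    # examples to learn a boundary it knows nothing about in advance.
    def train_perceptron(examples, lr=0.1, epochs=50):
        w = [0.0, 0.0]
        b = 0.0
        for _ in range(epochs):
            for x, target in examples:
                out = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
                err = target - out
                w[0] += lr * err * x[0]
                w[1] += lr * err * x[1]
                b += lr * err
        return w, b

    # Knowledge-heavy, data-light: an expert-system-style rule base is
    # repaired after a single missed case by adding one exception rule.
    def patch_rule_base(rules, missed_case, correct_label):
        # rules: list of (condition_fn, label); earlier rules win.
        rules.insert(0, (lambda case, m=missed_case: case == m, correct_label))
        return rules

    if __name__ == "__main__":
        # hundreds of examples of a simple threshold concept
        data = []
        for _ in range(200):
            x, y = random.random(), random.random()
            data.append(((x, y), 1 if x + y > 1.0 else 0))
        w, b = train_perceptron(data)
        print("learned weights:", w, "bias:", b)

        # one heuristic rule plus a single counter-example
        rules = [(lambda case: case.startswith("load-bearing"), "keep")]
        rules = patch_rule_base(rules,
                                "load-bearing partition (temporary)", "remove")
        case = "load-bearing partition (temporary)"
        label = next(lbl for cond, lbl in rules if cond(case))
        print(case, "->", label)

The perceptron gets nowhere with one example but does fine with two
hundred; the rule base is corrected by exactly one example, but only
because someone already wrote the heuristic it hangs off.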

I was wondering if anyone out there has something to say about the
work that has been done on getting these methods to learn with lots of
knowledge and little data.  Are we quite a ways from representing lots
of knowledge in these systems?  Is there a difference in the kind of
learning going on in the two cases?

Please respond directly.
Rgds,
Phil

--
Philip Tomlinson
Masters Student in Design Computing
Sydney University
tomli_p@arch.su.edu.au
