Newsgroups: sci.cognitive,bionet.neuroscience,comp.ai.philosophy
From: ohgs@chatham.demon.co.uk (Oliver Sparrow)
Path: cantaloupe.srv.cs.cmu.edu!das-news.harvard.edu!news2.near.net!MathWorks.Com!europa.eng.gtefsd.com!howland.reston.ans.net!news.sprintlink.net!demon!chatham.demon.co.uk!ohgs
Subject: Re: Mind Models
References: <35s56r$t3t@portal.gmu.edu> <3655uj$led@zip.eecs.umich.edu> <780567809snz@chatham.demon.co.uk> <3671ld$nmg@portal.gmu.edu>
Organization: Royal Institute of International Affairs
Reply-To: ohgs@chatham.demon.co.uk
X-Newsreader: Demon Internet Simple News v1.27
Lines: 14
Date: Thu, 6 Oct 1994 15:32:10 +0000
Message-ID: <781457530snz@chatham.demon.co.uk>
Sender: usenet@demon.co.uk
Xref: glinda.oz.cs.cmu.edu sci.cognitive:5290 comp.ai.philosophy:20823

I am amazed that simple neural nets can learn boolean logic quite effectively.
I have been playing with one over the weekend, and it managed to handle the
equivalent of quite a deep array of NAND, AND and OR gates. This could, in
principle, offer the thresholds and tapers against which other nets could
learn a fuzzy approach.

The issue of the objective function (ultimately, what are these things trying
to emulate) can be seen as an arbitrary falling into one of a limited number
of poles, followed by a genetic-algorithm-like pruning of networks which fail
to establish enough client or descendant groups. That pruning would, however,
amount to a lack of reinforcement operating at a somewhat higher level of
abstraction than a Hebbian net.
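For concreteness, the gate-learning claim can be sketched with the simplest
possible net: a single threshold unit trained by the classic perceptron rule,
which provably converges on linearly separable functions such as these gates
(and NAND units, being universal, can then be stacked into any deeper array).
This is an illustration only; the post does not say what architecture or
training rule was actually used, and the learning rate and epoch count below
are arbitrary.

```python
# A single threshold unit trained with the perceptron learning rule.
# Architecture, learning rate and epochs are illustrative choices, not
# those of the original experiment.

def train_gate(samples, epochs=100, lr=0.1):
    """Learn weights for one two-input boolean gate from (inputs, target) pairs."""
    w = [0.0, 0.0]   # one weight per input
    b = 0.0          # bias, i.e. the (negated) firing threshold
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # perceptron rule: move only on mistakes
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def run(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
gates = {
    "AND":  [((x1, x2), x1 & x2) for x1, x2 in inputs],
    "OR":   [((x1, x2), x1 | x2) for x1, x2 in inputs],
    "NAND": [((x1, x2), 1 - (x1 & x2)) for x1, x2 in inputs],
}

for name, samples in gates.items():
    w, b = train_gate(samples)
    print(name, [run(w, b, x1, x2) for x1, x2 in inputs])
```

Each unit recovers its gate's full truth table; the learned weights and bias
are exactly the "thresholds" the post refers to.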
_________________________________________________

  Oliver Sparrow
  ohgs@chatham.demon.co.uk
