Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!bb3.andrew.cmu.edu!nntp.sei.cmu.edu!news.psc.edu!hudson.lm.com!news.math.psu.edu!chi-news.cic.net!newsfeed.internetmci.com!in2.uu.net!pipeline!psinntp!psinntp!psinntp!megatest!news
From: Dave Jones <djones>
Subject: Step function activation
Content-Type: text/plain; charset=us-ascii
Message-ID: <DJq9qB.yp@Megatest.COM>
Sender: news@Megatest.COM (News Admin)
Nntp-Posting-Host: pluto
Content-Transfer-Encoding: 7bit
Organization: Megatest Corporation
References: <4aa3dv$nmq@news.onramp.net> <4ajlco$ep8@rzsun02.rrz.uni-hamburg.de>
Mime-Version: 1.0
Date: Sun, 17 Dec 1995 11:10:57 GMT
X-Mailer: Mozilla 1.1N (X11; I; SunOS 5.4 sun4m)
X-Url: news:4ajlco$ep8@rzsun02.rrz.uni-hamburg.de
Lines: 12

Okay, suppose for some reason you MUST use a step function
as the activation function in a multi-layer perceptron.
How do you train it? Genetic algorithms and simulated annealing
are obvious choices, but can you devise a goal-directed,
back-propagation-like scheme that doesn't depend on
the convenience of gradients?

I think there may be a way.
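For what it's worth, one published goal-directed scheme for hard-threshold
networks is MADALINE Rule II (Widrow & Winter): when a pattern is
misclassified, tentatively flip the hidden unit whose net input is closest
to zero (the "least confident" one), and keep the weight change only if the
total error actually drops. A rough sketch below, in Python with NumPy --
details (the +/-1 coding, the learning rate, the trial-and-revert loop) are
my own fill-in, not a faithful reproduction of the original rule:

```python
import numpy as np

rng = np.random.default_rng(0)

def step(x):
    """Hard-threshold (sign) activation, +/-1 coded."""
    return np.where(x >= 0, 1.0, -1.0)

class StepMLP:
    """One-hidden-layer perceptron with step activations throughout."""
    def __init__(self, n_in, n_hidden):
        self.W1 = rng.normal(size=(n_hidden, n_in + 1))  # +1 column for bias
        self.W2 = rng.normal(size=(1, n_hidden + 1))

    def forward(self, x):
        h = step(self.W1 @ np.append(x, 1.0))
        y = step(self.W2 @ np.append(h, 1.0))
        return h, y[0]

def error(net, X, T):
    """Count of misclassified patterns."""
    return sum(net.forward(x)[1] != t for x, t in zip(X, T))

def train_mrii(net, X, T, epochs=200, lr=0.5):
    """MADALINE-Rule-II-style trial-and-revise training (sketch)."""
    for _ in range(epochs):
        for x, t in zip(X, T):
            h, y = net.forward(x)
            if y == t:
                continue
            xb = np.append(x, 1.0)
            nets = net.W1 @ xb
            # Try flipping hidden units, least-confident first.
            for j in np.argsort(np.abs(nets)):
                before = error(net, X, T)
                old = net.W1[j].copy()
                target = -h[j]
                # Nudge unit j's weights until its output on x flips.
                while step(net.W1[j] @ xb) != target:
                    net.W1[j] += lr * target * xb
                if error(net, X, T) < before:
                    break            # the flip helped; keep it
                net.W1[j] = old      # no improvement; revert
            # Adapt the output unit with a plain perceptron update.
            h, y = net.forward(x)
            if y != t:
                net.W2[0] += lr * t * np.append(h, 1.0)
    return net

# Toy problem: XOR in +/-1 coding.
X = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]], float)
T = np.array([-1, 1, 1, -1], float)
net = train_mrii(StepMLP(2, 4), X, T)
```

The keep-or-revert test is what makes this goal-directed rather than
random search: each hidden-layer change is accepted only if it lowers the
training-set error, so it's closer in spirit to backprop than to simulated
annealing, even though no gradient ever appears.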


     Dave

