Newsgroups: comp.ai.neural-nets
From: rwa@fisher.bio.uci.edu ("Russell W. Anderson")
Message-ID: <199411082140.AA19392@pinus>
Subject: Feedforward Neural Net.
Date: Tue, 8 Nov 1994 13:40:54 -0800



Dear Ben,

I am posting this because direct email failed.

The chemotaxis algorithm (biased random-walk learning,
see references below)
can train a network with arbitrary activation functions.
Barto and Sutton's adaptive critic would work as well.

Russell W. Anderson
Dept. of Ecology and Evolutionary Biology
University of California
Irvine, CA 92717
Phone: (714) 824-7307
Fax: 714-824-2181
email: rwa@fisher.bio.uci.edu
   or  RWANDERS@uci.edu

----------------------------------------------------

OUTLINE OF ALGORITHM:

The Chemotaxis Algorithm

     The 'chemotaxis training algorithm' consists of a biased
random walk in weight space. One advantage of this training
method is that it requires neither gradient calculations nor
detailed error signals. It also allows automatic adjustment of the
single learning parameter, which would otherwise have to be found
empirically.

Description of the Algorithm

     The network is initialized with an arbitrary set of weights,
w_0, and performance E(w_0) is evaluated.  A random vector Δw is
chosen from a multivariate Gaussian distribution with zero mean
and unit standard deviation.  This random vector is added to the
current weights, w, to create a 'tentative' set of weights, w_t:

          w_t = w + h * Δw

where h is a stepsize parameter.  Performance E(w_t) is then
calculated for the tentative weights.  If the error of the new
configuration is lower than that of the original, the tentative
changes to the weight vector are retained; otherwise, the system
reverts to its original configuration.  If a successful direction in
weight space is found, weight modifications continue along the
same random vector until progress ceases.  A new random vector is
then chosen, and the process is repeated.  Although this method
seems inefficient, it converges on a solution at a rate comparable
to back-propagation.  More details are available in the cited
literature.
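The procedure above can be sketched in a few lines of Python with
NumPy. This is only an illustration of the biased random walk as
described here, not code from the cited papers; the function and
parameter names (chemotaxis_train, error_fn, h, max_iters) are my
own, and automatic adjustment of h is omitted for brevity.

```python
import numpy as np

def chemotaxis_train(error_fn, w, h=0.05, max_iters=2000, rng=None):
    """Biased random-walk ('chemotaxis') search in weight space.

    error_fn : callable mapping a weight vector to a scalar error.
    w        : initial weight vector (NumPy array).
    h        : stepsize parameter.
    """
    rng = np.random.default_rng() if rng is None else rng
    err = error_fn(w)
    for _ in range(max_iters):
        # Draw a random direction: zero mean, unit standard deviation.
        dw = rng.standard_normal(w.shape)
        # Keep stepping along this direction while the error drops;
        # if the first step fails, w is left unchanged (revert).
        while True:
            wt = w + h * dw
            new_err = error_fn(wt)
            if new_err < err:
                w, err = wt, new_err
            else:
                break
    return w, err
```

As a toy problem, minimizing the squared distance to a fixed target
vector converges without any gradient information:

```python
target = np.array([1.0, -2.0, 0.5])
error = lambda v: float(np.sum((v - target) ** 2))
w, e = chemotaxis_train(error, np.zeros(3),
                        rng=np.random.default_rng(0))
```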


REFERENCES:

H.J. Bremermann and R.W. Anderson, "An alternative to back-
propagation: A simple rule of synaptic modification for neural net
training and memory," U.C. Berkeley, Center for Pure and Applied
Mathematics, Report PAM-483 (1989).

H.J. Bremermann and R.W. Anderson, "How the brain adjusts
synapses - maybe," in Automated Reasoning: Essays in Honor of
Woody Bledsoe, R.S. Boyer, Ed., Chapter 6, pp. 119-147, Kluwer
Academic Publishers, New York (1991).

R.W. Anderson, "Random-walk learning: A neurobiological
correlate to trial-and-error," in Progress in Neural Networks,
special volume on Biological Neural Networks, D.C. Tam, Volume
Ed., O.M. Omidvar, Series Ed., Ablex Publishing Corp., Norwood,
NJ (in press, 1994).

