Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!news.mathworks.com!udel!gatech!howland.reston.ans.net!EU.net!sun4nl!ruuinf!tijger.fys.ruu.nl!usenet
From: lodder@fys.ruu.nl (Arian Lodder)
Subject: Re: What is an Elman network?
X-Nntp-Posting-Host: ruuny5.fys.ruu.nl
Message-ID: <D3F7I6.1My@fys.ruu.nl>
Summary: Elman and Jordan Nets
Keywords: Neural Networks
Sender: usenet@fys.ruu.nl (News system Tijgertje)
Reply-To: lodder@fys.ruu.nl
Organization: Utrecht University
References: <3grdok$56n@portal.gmu.edu>
Date: Fri, 3 Feb 1995 10:51:42 GMT
Lines: 58

In article <3grdok$56n@portal.gmu.edu>, rkotaru@site.gmu.edu (Raj Kotaru (ECE 549)) writes:
> The MATLAB neural network toolbox supposedly has a built in function "elman"
> that can be used to train and simulate a recurrent neural
> network.  What kind of architecture does it possess?  Does it
> have feedback elements as well?  Does it uses conventional (static)
> backpropagation or some kind of dynamic backpropagation for weight
> adjustment?
> 
> Any anwsers to these questions will be greatly appreciated.

When you want to classify time-dependent examples, e.g. in forecasting,
partial recurrent networks are the ones you can use.

An example of these networks is the Jordan net. This is basically a FF-net,
but the output nodes are connected to extra input nodes with fixed strength
(often one). Those context nodes (extra input nodes) are themselves recurrent
in that each is connected to itself with connection strength lambda,
which is also fixed (so non-trainable).

What remains to be trained is simply a FF-net, and this can be done by
several algorithms (standard BP among them).
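To make the context-node mechanism concrete, here is a minimal sketch of one time step of a Jordan net in plain Python (illustrative code only, not the MATLAB toolbox implementation; all names and the toy dimensions are made up for the example):

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def jordan_step(x, context, W_hid, W_out, lam):
    """One time step of a Jordan net.

    The context nodes act as extra input nodes: they receive the previous
    outputs with fixed weight 1 and their own previous value scaled by the
    fixed, non-trainable self-recurrence strength lam.  Only W_hid and
    W_out are trainable, so ordinary static backprop applies to them.
    """
    full_in = x + context                 # context nodes are extra inputs
    hidden = [sigmoid(sum(w * i for w, i in zip(row, full_in))) for row in W_hid]
    output = [sigmoid(sum(w * h for w, h in zip(row, hidden))) for row in W_out]
    # Context update: previous output (weight 1) + decayed old context (lam).
    new_context = [o + lam * c for o, c in zip(output, context)]
    return output, new_context

# Tiny example: 2 inputs, 3 hidden nodes, 1 output (so 1 context node).
random.seed(0)
W_hid = [[random.uniform(-1, 1) for _ in range(2 + 1)] for _ in range(3)]
W_out = [[random.uniform(-1, 1) for _ in range(3)]]
ctx = [0.0]
for x in ([1.0, 0.0], [0.0, 1.0], [1.0, 1.0]):
    out, ctx = jordan_step(x, ctx, W_hid, W_out, lam=0.5)
```

Note that nothing in the recurrent path is trained: the feedback and self-recurrence weights stay fixed, which is why the trainable part reduces to an ordinary FF-net.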

An Elman net is a modification of a Jordan net, where the feedback loop does not
come from the output nodes but from the hidden nodes. Here too the feedback strengths
are fixed, so you only have to train the feed-forward net.
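The Elman variant differs only in where the context copy comes from; a sketch under the same assumptions as above (plain Python, illustrative names, not the toolbox code):

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def elman_step(x, context, W_hid, W_out):
    """One time step of an Elman net.

    The context nodes hold a copy of the previous hidden activations,
    fed back with fixed weight 1 (non-trainable).  As with the Jordan
    net, only W_hid and W_out need training, so static backprop works.
    """
    full_in = x + context
    hidden = [sigmoid(sum(w * i for w, i in zip(row, full_in))) for row in W_hid]
    output = [sigmoid(sum(w * h for w, h in zip(row, hidden))) for row in W_out]
    return output, hidden[:]     # the new context is simply the hidden layer

# Tiny example: 2 inputs, 3 hidden nodes (so 3 context nodes), 1 output.
random.seed(1)
W_hid = [[random.uniform(-1, 1) for _ in range(2 + 3)] for _ in range(3)]
W_out = [[random.uniform(-1, 1) for _ in range(3)]]
ctx = [0.0, 0.0, 0.0]
for x in ([1.0, 0.0], [0.0, 1.0], [1.0, 1.0]):
    out, ctx = elman_step(x, ctx, W_hid, W_out)
```

Since the hidden layer is usually larger than the output layer, an Elman net carries a richer state between time steps than a Jordan net of the same size.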

see also:
. <Elman  90>   J.L. Elman: Finding Structure in Time, Cognitive Science, Vol. 14,
                pp. 179-211, 1990.
. <Jordan 86a>  M.I. Jordan: Attractor Dynamics and Parallelism in a Connectionist
                Sequential Machine, in Proc. of the 8th Annual Conference of the
                Cognitive Science Society, pp. 531-546, Erlbaum, Hillsdale, NJ,
                1986.
. <Jordan 86b>  M.I. Jordan: Serial Order: A Parallel Distributed Processing
                Approach. Technical Report Nr. 8604, Institute for Cognitive
                Science, Univ. of California, San Diego, La Jolla, CA, 1986.
    (also in:)  M.I. Jordan, D.E. Rumelhart (eds.) Advances in Connectionist
                Theory: Speech, Erlbaum, Hillsdale, 1989

Hope this will help you

Arian.

--
_______________________________________________________________________

       _/_/   _/_/_/   _/_/_/   _/_/   _/    _/ A.W. Lodder
    _/    _/ _/    _/   _/   _/    _/ _/_/  _/ A.W.Lodder@fys.ruu.nl
   _/_/_/_/ _/_/_/_/   _/   _/_/_/_/ _/  _/_/ phone:
  _/    _/ _/  _/     _/   _/    _/ _/    _/       +31 (0)30 - 53 2955
 _/    _/ _/    _/ _/_/_/ _/    _/ _/    _/ fax  : +31 (0)30 - 53 7555

University of Utrecht, Department of Computer Topics in Physics 
Princetonplein 5 / P.O. Box 80000, 3508 TA Utrecht, The Netherlands
_______________________________________________________________________



