Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!gatech!purdue!news.cs.indiana.edu!blank@cs.indiana.edu
From: "doug blank" <blank@cs.indiana.edu>
Subject: Re: What is an Elman network?
Message-ID: <1995Feb4.134456.4072@news.cs.indiana.edu>
Organization: Computer Science, Indiana University
References: <3grdok$56n@portal.gmu.edu>
Date: Sat, 4 Feb 1995 13:44:49 -0500
Lines: 30

In article <3grdok$56n@portal.gmu.edu>,
Raj Kotaru (ECE 549) <rkotaru@site.gmu.edu> wrote:

>The MATLAB neural network toolbox supposedly has a built in function "elman"
>that can be used to train and simulate a recurrent neural
>network.  What kind of architecture does it possess?  Does it
>have feedback elements as well?  Does it use conventional (static)
>backpropagation or some kind of dynamic backpropagation for weight
>adjustment?

Elman calls his version of a recurrent network a Simple Recurrent
Network, or SRN. Many others just call it an Elman Network. The idea is
to copy the activations of the hidden layer after each time step to a
special bank of units on the input layer (usually called the
"context" units, or in Jordan's terms, the "plan units"). That's it. This
very simple trick turns a plain ol' feed-forward back-prop network into
a recurrent version, without the expense of unrolling the entire
sequence to do it the "proper" way. Although you are not guaranteed to
be able to learn sequences, in practice it usually works out that you can.

See Elman's paper "Finding Structure in Time" (Cognitive Science 14, 1990) for further details.

-doug blank


-- 
=====================================================================
blank@cs.indiana.edu                Douglas Blank, Indiana University
Computer Science                                    Cognitive Science
=====================================================================
