From powella@images.cs.und.ac.za Tue Jul 5 17:23:58 EDT 1994
Article: 17610 of comp.ai.neural-nets
Xref: glinda.oz.cs.cmu.edu comp.ai.neural-nets:17610
Path: honeydew.srv.cs.cmu.edu!nntp.club.cc.cmu.edu!godot.cc.duq.edu!news.duke.edu!convex!cs.utexas.edu!howland.reston.ans.net!ee.und.ac.za!tplinfm
From: powella@images.cs.und.ac.za (Alan Powell)
Newsgroups: comp.ai.neural-nets
Subject: RESPONSE: Recurrent NNs & temporal sequences
Date: 5 Jul 1994 13:16:29 GMT
Organization: Univ. Natal, Durban, S. Africa
Lines: 190
Message-ID: <2vbmfe$52q@lucy.ee.und.ac.za>
NNTP-Posting-Host: images.cs.und.ac.za
X-Newsreader: TIN [version 1.2 PL1]

A few responses to my recent request for info on recurrent neural nets
(applied to temporal/sequential data). Any further responses will be
summarised and posted.

None of the respondents have asked to remain anon - therefore they aren't.

Enjoy, Al.
(powella@images.cs.und.ac.za)

=================

From: back@elec.uq.oz.au (Andrew Back)

Here are some references for papers on neural network methods for signal
processing and time series applications. You may wish to see if any of
these are of interest to you. I've put in a number of ours, and a
selection of others which we've found interesting.

Back, A.D., Tsoi, A.C. "FIR and IIR synapses, a new neural network
architecture for time series modelling". Neural Computation, Vol 3, No 3,
pp. 375-385, 1991.

Back, A.D., Tsoi, A.C. "An adaptive lattice algorithm for IIR multilayer
perceptrons". Neural Computation, Vol 4, No 6, pp. 922-931, 1992.

Back, A.D., Tsoi, A.C. "Nonlinear system identification using multilayer
perceptrons with locally recurrent synaptic structure". Neural Networks
for Signal Processing 2, Kung et al. (Eds), Proc. IEEE Workshop,
pp. 444-453, 1992.

Tsoi, A.C., Back, A.D. "Locally recurrent globally feedforward networks,
a critical review of architectures". IEEE Trans. Neural Networks, Special
Issue on Dynamic Recurrent Networks, June 1994.

Frasconi, P., Gori, M., Soda, G. "Local feedback multilayered networks".
Neural Computation, Vol 4, pp. 120-130, 1992.

Gori, M., Soda, G. "Temporal pattern recognition using EBPS". EURASIP, 1990.

Jordan, M.I. "Supervised learning and systems with excess degrees of
freedom". Massachusetts Institute of Technology, COINS Technical Report
88-27, May 1988.

Mozer, M. "A focussed back propagation algorithm for temporal pattern
recognition". Complex Systems, Vol 3, pp. 349-381, 1989.

Mozer, M. "Neural net architectures for temporal sequence processing".
To appear in Weigend, A., Gershenfeld, N. (Eds), Predicting the Future
and Understanding the Past. Addison Wesley, Redwood City, Calif., 1993.

Nerrand, O., Roussel-Ragot, P., Personnaz, L., Dreyfus, G., Marcos, S.
"Neural networks and nonlinear adaptive filtering: unifying concepts and
new algorithms". Neural Computation, Vol 5, pp. 165-199, 1993.

Poddar, P., Unnikrishnan, K.P. "Nonlinear prediction of speech signals
using memory neuron networks". Neural Networks for Signal Processing I,
Eds Juang, B.H., Kung, S.Y., Kamm, C.A., IEEE Press, 1991.

Poddar, P., Unnikrishnan, K.P. "Memory neuron networks: A Prolegomenon".
General Motors Research Laboratories Report GMR-7493, October 21, 1991.

Robinson, A.J. Dynamic Error Propagation Networks. Cambridge University
Engineering Department, PhD Thesis, 1989.

de Vries, B., Principe, J.C. "A theory for neural networks with time
delays". Advances in Neural Information Processing Systems 3, R.P.
Lippmann (Ed.), pp. 162-168, 1991.

de Vries, B., Principe, J.C. "The Gamma Model - a new neural model for
temporal processing". Neural Networks, Vol 5, No 4, pp. 565-576, 1992.

Wan, E. "Temporal backpropagation for FIR neural networks". Proc. Int.
Joint Conf. Neural Networks, San Diego, pp. 575-580, June 1990.

-----------

From: Albert Nigrin

The following paper is available for copying via ftp. It is located at:

    palestrina.anderson-lab.american.edu

in the file:

    pub/neural_nets/nigrin_temporal.ps.Z

Using SONNET 1 to Segment Continuous Sequences of Items

To appear in Judy Dayhoff (Ed.), Temporal Dynamics and Time-Varying
Pattern Recognition. ABLEX Publishing Corporation, NJ.

Abstract
--------
In order for an autonomous agent to operate in a real-world environment,
it must overcome at least three major problems. First, to allow the agent
to operate in real time, it must be able to respond to events at the pace
with which they occur. Second, since real-world events usually have no
predefined beginning, middle or ending, an agent must be able to form its
own segmentations. And finally, since there is often no external teacher
present to guide it, the agent must be able to learn its categories in an
unsupervised fashion.

This paper will examine these and other issues (and will also critique
some alternate approaches for handling temporal patterns). It will design
a real-time neural network that learns to segment a never-ending stream
of input items in an unsupervised fashion. For example, consider the
sequence below, which is repeatedly presented to the network. The letters
are presented one at a time, with no breaks in the presentation.
Therefore, after the last letter, Z, is presented, the first letter in
the sequence, E, is immediately presented again.

  E A T B C D N O W F G H I E A T J K L M N O W P Q R E A T S U V N O W X Y Z

Notice that the lists EAT and NOW are embedded within several different
locations (contexts). Because of this, the network will learn to
recognize EAT and NOW as significant patterns, and learn to segment the
full list accordingly.

Albert Nigrin
Assistant Professor
American University
Department of Computer Science and Information Systems
4400 Massachusetts Ave. NW
Washington DC 20016-8116
email: nigrin@american.edu
(202) 885-3145
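[Summariser's note: the sketch below is NOT SONNET 1 - it is just a toy,
non-neural Python count over the repeating stream above, to show why EAT
and NOW are the only short chunks that recur across different contexts
and therefore stand out as candidate segments. Only the stream itself is
taken from the abstract; the rest is my own illustration.]

from collections import Counter

# The repeating stream from the example above, written without spaces.
stream = "EATBCDNOWFGHIEATJKLMNOWPQREATSUVNOWXYZ"

# The presentation is cyclic (after Z comes E again), so let chunks wrap.
cyclic = stream + stream[:2]
counts = Counter(cyclic[i:i + 3] for i in range(len(stream)))

for chunk, n in counts.most_common(5):
    print(chunk, n)
# EAT and NOW each occur three times; every other three-letter chunk
# occurs exactly once - the regularity an unsupervised segmenter exploits.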
---------------

From: "Russell W. Anderson"

In response to your query regarding recurrent neural networks:

I applied the chemotaxis algorithm [1,2,3] to training recurrent networks
to 1) integrate an input signal, 2) oscillate at various frequencies, and
3) operate on an input string as a simple 2-state finite state machine.
The results appear in Chapter 5 of my Ph.D. thesis [4]. I never bothered
to publish the results in a journal article, since training times were
very long.

The main advantage of the chemotaxis algorithm is that it does not
require gradients to be calculated and, as I have argued in [3], it is a
biologically plausible learning rule.
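[Summariser's note: the update rule itself is not spelled out above, but
as described in the reports below it is essentially a random-direction
search in weight space: perturb all weights along a random direction,
keep the step (and the direction) while the error improves, otherwise
draw a new direction. The following is a minimal sketch of that idea on
the signal-integration task; the tanh recurrent net, its size, step size
and iteration count are my own illustrative assumptions, not code from
the thesis or the cited reports.]

import numpy as np

rng = np.random.default_rng(0)

N_HID = 8     # number of recurrent units (assumed for illustration)
T = 50        # length of the training sequence
STEP = 0.05   # size of each perturbation step (assumed)

def run_net(weights, inputs):
    # Fully recurrent net: h <- tanh(w_in*u + W_rec h), scalar y = w_out.h
    w_in, w_rec, w_out = weights
    h = np.zeros(N_HID)
    outputs = []
    for u in inputs:
        h = np.tanh(w_in * u + w_rec @ h)
        outputs.append(w_out @ h)
    return np.array(outputs)

def error(weights, inputs, targets):
    return np.mean((run_net(weights, inputs) - targets) ** 2)

# Task 1) from the response: (approximately) integrate the input signal.
inputs = rng.uniform(-1.0, 1.0, T)
targets = 0.1 * np.cumsum(inputs)

weights = [rng.normal(0, 0.3, N_HID),            # input weights
           rng.normal(0, 0.3, (N_HID, N_HID)),   # recurrent weights
           rng.normal(0, 0.3, N_HID)]            # output weights

best = error(weights, inputs, targets)
direction = [rng.normal(size=w.shape) for w in weights]

for it in range(20000):
    # Take a tentative step along the current random direction.
    trial = [w + STEP * d for w, d in zip(weights, direction)]
    e = error(trial, inputs, targets)
    if e < best:
        # Improvement: accept the step and keep the same direction.
        weights, best = trial, e
    else:
        # No improvement: discard the step and draw a new random direction.
        direction = [rng.normal(size=w.shape) for w in weights]
    if it % 5000 == 0:
        print("iteration %6d   mse %.5f" % (it, best))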
I also used the chemotaxis algorithm to train feed-forward neural
networks to produce the *parameters* of a time-signal, which were then
used to control an unknown plant [5]. This might not be what you had in
mind, however.

You might also look for papers by M. Conrad, R. Smalz, L. Fogel and
D. Fogel, who use evolutionary algorithms to train networks to perform
sequential tasks.

[1] H.J. Bremermann and R.W. Anderson, "An Alternative to
Back-Propagation: A Simple Rule of Synaptic Modification for Neural Net
Training and Memory", U.C. Berkeley, Center for Pure and Applied
Mathematics, Report PAM-483 (1989).

[2] H.J. Bremermann and R.W. Anderson, "How the Brain Adjusts Synapses -
Maybe", in Automated Reasoning: Essays in Honor of Woody Bledsoe, Ed.
R.S. Boyer, Chapter 6, pp. 119-147, Kluwer Academic Publishers, New York
(1991).

[3] R.W. Anderson, "Random-Walk Learning: A Neurobiological Correlate to
Trial-and-Error", Los Alamos National Laboratory Technical Document
LA-UR-93-0022 (in press: Progress in Neural Networks, 1994).

[4] R.W. Anderson, Stochastic Optimization of Neural Networks and
Implications for Biological Learning, Ph.D. Dissertation, University of
California, San Francisco (1991).

[5] R.W. Anderson and V. Vemuri, "Neural Networks Can Be Used for
Open-Loop, Dynamic Control", Int'l J. Neural Networks, Vol 2(3),
pp. 71-84 (Sept. 1992).

Russell W. Anderson
Dept. of Ecology and Evolutionary Biology
University of California
Irvine, CA 92717
Phone: (714) 856-7307
Fax: (714) 725-2181
email: rwa@fisher.bio.uci.edu or RWANDERS@uci.edu