12:00, Wed 9 Oct 1996, WeH 7220

An Introduction to the Kalman Filter and its Application to Neural Net Training

Kan Deng

In 1989, Singhal and Wu demonstrated that a multilayer perceptron can be trained using the extended Kalman filter technique. This training algorithm requires significantly fewer training iterations than the back-propagation method, though at greater computational cost and memory requirements per iteration. Since then, several modifications have been proposed to reduce the computational complexity. In 1993, Puskorius and Feldkamp claimed that the Kalman filter is also powerful enough to train recurrent networks.

In this talk, I will give a top-level introduction to Kalman filter techniques, focusing on how to transform neural net training into a Kalman filtering problem. Techniques for improving computational efficiency will be summarized. Finally, I'd like to make a bold comment on the Kalman filter training approach, and then propose a maximum likelihood neural net training method.
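As a rough sketch of the standard formulation (not material from the talk itself): the network weights are treated as the state of a nonlinear system with an identity state transition, the network output is the measurement, and each training pattern drives one EKF update through the Jacobian of the output with respect to the weights. All sizes, names, and tuning constants below are illustrative assumptions.

import numpy as np

# A toy global-EKF weight update for a one-hidden-layer perceptron,
# in the spirit of Singhal & Wu (1989).  All sizes and constants are
# illustrative assumptions, not material from the talk.
#
# State:       w        (all weights stacked into one vector)
# Transition:  w_k = w_{k-1}           (identity; weights are constant)
# Measurement: y_k = h(w_k, x_k) + v_k (network output plus noise)

rng = np.random.default_rng(0)
n_in, n_hid = 2, 3                       # 2 inputs, 3 hidden units, 1 output
n_w = n_hid * n_in + n_hid               # hidden-layer + output-layer weights

w = rng.normal(scale=0.5, size=n_w)      # state estimate
P = 10.0 * np.eye(n_w)                   # state covariance (initial uncertainty)
R = np.array([[0.1]])                    # assumed measurement-noise variance
Q = 1e-6 * np.eye(n_w)                   # small process noise keeps P well-conditioned

def unpack(w):
    return w[:n_hid * n_in].reshape(n_hid, n_in), w[n_hid * n_in:]

def forward(w, x):
    W1, w2 = unpack(w)
    return w2 @ np.tanh(W1 @ x)

def jacobian(w, x):
    # Sensitivity of the scalar output to every weight; this 1 x n_w
    # row plays the role of the EKF measurement matrix H.
    W1, w2 = unpack(w)
    h = np.tanh(W1 @ x)
    dW1 = np.outer(w2 * (1.0 - h**2), x).ravel()
    return np.concatenate([dW1, h])[None, :]

# One EKF update per training pattern, using XOR as a toy target.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([0.0, 1.0, 1.0, 0.0])
for epoch in range(200):
    for x, t in zip(X, T):
        H = jacobian(w, x)
        S = H @ P @ H.T + R              # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)   # Kalman gain, n_w x 1
        w = w + (K * (t - forward(w, x))).ravel()
        P = P - K @ H @ P + Q

print([round(float(forward(w, x)), 2) for x in X])   # should approach [0, 1, 1, 0]

Note the cost structure this sketch makes visible: every update manipulates the n_w x n_w covariance matrix P, which is exactly the per-iteration expense and memory burden that the later decoupled and layer-wise variants aim to reduce.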