Monday, 11 Oct 93, 3:00, WeH 4601

Explanation-Based Neural Network Learning in Chess

Sebastian Thrun

In the series of ``incredibly preliminary chats'' I will present ongoing research on learning evaluation functions in the domain of chess. Following the pioneering work by Boyan and Tesauro, who successfully applied temporal-difference (TD) learning and backpropagation to backgammon, I am exploring the usefulness of TD combined with explanation-based neural network learning (EBNN) in the far more complex domain of chess. In TD, learning is driven exclusively by the final outcome of a game (win/loss/draw). The EBNN routine allows whole games to be explained and analyzed in terms of knowledge extracted from a large database of grandmaster games. I will present some concrete but still preliminary results, and discuss pitfalls, software design, and complexity issues. The presentation will be informal. The research is being done in collaboration with Tom Mitchell, Hans Berliner and Horst Aursich.
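To make the TD idea concrete, here is a minimal sketch of a TD(lambda) update in which a linear evaluation function is trained from nothing but the final game outcome; the feature encoding, learning rate, and lambda value are illustrative assumptions, not the setup used in the talk.

    # Minimal TD(lambda) sketch (hypothetical, for illustration only):
    # a linear evaluation V(s) = w . phi(s) is adjusted from a single
    # game whose only reward is the final result (win/loss/draw).

    import numpy as np

    def td_lambda_update(w, features, outcome, alpha=0.01, lam=0.7):
        """Update weights w from one game.

        features : list of feature vectors phi(s_0), ..., phi(s_T)
                   for the positions of the game (assumed encoding).
        outcome  : +1 win, 0 draw, -1 loss, observed only at the end.
        """
        trace = np.zeros_like(w)            # eligibility trace
        for t in range(len(features) - 1):
            v_t  = w @ features[t]          # value of current position
            v_t1 = w @ features[t + 1]      # value of next position
            # only the last transition carries the true reward
            target = outcome if t + 1 == len(features) - 1 else v_t1
            delta = target - v_t            # TD error
            trace = lam * trace + features[t]
            w = w + alpha * delta * trace
        return w

    # toy usage: five random "positions" with 8 features each, a won game
    rng = np.random.default_rng(0)
    w = np.zeros(8)
    game = [rng.standard_normal(8) for _ in range(5)]
    w = td_lambda_update(w, game, outcome=+1.0)

In the actual work a neural network replaces the linear evaluation, and EBNN additionally supplies explanation-derived slope information from the grandmaster-game database; the sketch above shows only the outcome-driven TD component.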