Monday 4/26/93 ; 3:00 ; WeH 7220

Sven Koenig -- Complexity Analysis of Reinforcement Learning

In this talk, I will give a short overview of results about the complexity of real-time reinforcement learning in deterministic domains and how it can be decreased. In particular, I will discuss the influence of

- the way the domain is represented,
- the initial Q-values (or U-values), and
- domain properties, for example: one can undo every action immediately ("no one-way streets"),

and will then focus on the influence of the reactivity of the algorithm, i.e. how much planning is done between action executions. I will also discuss what deterministic and probabilistic domains have in common and how they differ, thereby outlining future research.

This is NOT a theory talk, although it is not empirical work either. It is meant to provide empirical researchers with guidelines on how to represent their domains, which domain properties to watch out for, etc.
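The influence of initial Q-values mentioned above can be sketched with a toy example. This is a hypothetical illustration, not material from the talk: one-step Q-learning with greedy action selection in a deterministic corridor, using an action-penalty representation (reward -1 per action), comparing uniform zero initialization against initialization with the exact goal distances. The function `steps_to_goal` and both initializations are my own constructions for illustration.

```python
def steps_to_goal(n_states, q_init, max_steps=100_000):
    """One-step Q-learning with greedy action selection in a deterministic
    corridor of states 0..n_states-1 (goal = rightmost state, reward -1 per
    action). In a deterministic domain a learning rate of 1 suffices.
    Returns the number of action executions before the goal is first reached.
    (Illustrative sketch only, not the analysis from the talk.)"""
    actions = (-1, +1)                            # left, right
    goal = n_states - 1
    q = {(s, a): q_init(s, a) for s in range(goal) for a in actions}
    s = 0
    for step in range(1, max_steps + 1):
        a = max(actions, key=lambda b: q[(s, b)])  # greedy (ties -> left)
        s2 = min(max(s + a, 0), goal)              # deterministic transition
        future = 0.0 if s2 == goal else max(q[(s2, b)] for b in actions)
        q[(s, a)] = -1.0 + future                  # Bellman backup
        if s2 == goal:
            return step
        s = s2
    return max_steps

N = 10
uniform = steps_to_goal(N, lambda s, a: 0.0)
# informed: initialize each Q-value with the exact negative goal distance
informed = steps_to_goal(N, lambda s, a: -(1 + (N - 1 - min(max(s + a, 0), N - 1))))
print(uniform, informed)
```

With informed initial values the agent walks straight to the goal in N-1 steps, while with uniform zero initialization the action penalties must first propagate through the Q-values, so many more action executions are needed before the goal is first reached.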