=====================================================
Getting Reinforcement Learning to Work on Real Robots
=====================================================

Bill Smart, MIT

Programming robots is hard. The idea of having a robot learn how to behave, rather than being explicitly told what to do, is appealing because it makes the programmer's job easier. Unfortunately, getting robots to learn anything useful is also hard. Robots are forced to exist in a dynamic environment that lacks the safety, repeatability, and carefully-controlled stochasticity of many simulator-based learning domains.

Reinforcement learning techniques seem well-suited to addressing many of these problems. However, elements of the domain such as real-valued sensor inputs, a lack of initial knowledge, and the constant need to keep the robot safe cause many traditional algorithms to fail. In this talk we look at some of the problems associated with learning on real robots and at how RL techniques can be used to address them. We describe how standard Q-learning techniques can be modified to work in the real-robot domain, focusing specifically on value-function approximation and the lack of initial knowledge. Finally, we present experimental results from a system implemented on an RWI B21 robot that is capable of learning simple tasks in real time. A minimal code sketch of the general Q-learning-with-approximation idea follows the bio below.

Bio: Bill Smart is a graduate student working with Leslie Kaelbling at MIT. His research interests include reinforcement learning, function approximation, and mobile robotics.
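
For readers unfamiliar with the setup the abstract refers to, the sketch below shows the general idea of Q-learning combined with value-function approximation over real-valued sensor inputs. It is not the system described in the talk (whose details are not given here); it is a minimal illustration assuming a linear approximator over hand-chosen state features, with one weight vector per discrete action. The class name LinearQ and all parameter values are hypothetical.

    import numpy as np

    class LinearQ:
        """Q-learning with a linear value-function approximator.

        States are real-valued feature vectors (e.g., processed sensor
        readings); actions are a small discrete set.
        """

        def __init__(self, n_features, n_actions,
                     alpha=0.1, gamma=0.9, epsilon=0.1):
            self.weights = np.zeros((n_actions, n_features))
            self.alpha = alpha      # learning rate
            self.gamma = gamma      # discount factor
            self.epsilon = epsilon  # exploration probability

        def q_values(self, features):
            # Q(s, a) is approximated as weights[a] . features.
            return self.weights @ features

        def act(self, features):
            # Epsilon-greedy action selection.
            if np.random.rand() < self.epsilon:
                return np.random.randint(len(self.weights))
            return int(np.argmax(self.q_values(features)))

        def update(self, features, action, reward, next_features, done):
            # Standard Q-learning target: bootstrap from the best
            # next-state action value unless the episode has ended.
            target = reward
            if not done:
                target += self.gamma * np.max(self.q_values(next_features))
            td_error = target - self.q_values(features)[action]
            # Gradient step on the weights of the action taken.
            self.weights[action] += self.alpha * td_error * features

On a real robot, the features would come from the sensor stream and the update would run once per control step; the talk's focus on lack of initial knowledge suggests the interesting questions are how to act safely before the approximator has seen enough data, which this sketch does not address.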