3:00, 13 December, WeH 4601

Planning for Risk-Sensitive Agents
Sven Koenig

(Note: If you are into hard-core reinforcement learning, please substitute "reinforcement learning" for "probabilistic planning" when reading the abstract. Thanks.)

Methods for planning in stochastic domains usually aim to find cost-minimal plans and therefore assume that the agent executing the plan has a risk-neutral attitude. Although there are many situations where risk-sensitive behavior is more appropriate, researchers have largely ignored the question of how to incorporate risk-sensitive attitudes into their planning mechanisms. Utility theory shows that it is rational to maximize expected utility, provided that the agent accepts a few simple axioms. Thus, one might believe that risk-sensitive attitudes could be accommodated simply by replacing all costs with their respective utilities (for an appropriate utility function). I will show that this is usually not the case and then introduce a general method that remedies this problem. Throughout the talk, I will use a simple blocks-world problem as an example and show how the best plan changes as the agent becomes more risk-seeking.
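To see why the naive substitution can fail, here is a toy sketch (my own illustration, not the speaker's construction). It assumes a hypothetical utility function over the *total* plan cost, u(c) = -sqrt(c), which is convex in cost and hence risk-seeking: two plans with the same expected cost are ranked differently by expected utility, and because u is nonlinear, summing per-step utilities does not agree with the utility of the summed cost.

```python
import math

# Hypothetical utility over the TOTAL cost of executing a plan.
# Convex in cost, so the agent is risk-seeking with respect to costs.
def u(total_cost):
    return -math.sqrt(total_cost)

# Each plan is a list of (probability, total cost) outcomes.
plan_a = [(1.0, 10.0)]               # deterministic cost 10
plan_b = [(0.5, 0.0), (0.5, 20.0)]   # a gamble with the same mean

def expected_cost(plan):
    return sum(p * c for p, c in plan)

def expected_utility(plan):
    return sum(p * u(c) for p, c in plan)

# Both plans look identical to a risk-neutral (cost-minimizing) planner ...
assert expected_cost(plan_a) == expected_cost(plan_b) == 10.0

# ... but the risk-seeking agent strictly prefers the gamble.
assert expected_utility(plan_b) > expected_utility(plan_a)

# Naive per-step substitution: a two-step plan that costs 5 at each step.
# Summing per-step utilities disagrees with the utility of the total cost,
# because u is nonlinear, so replacing step costs by utilities mis-ranks plans.
per_step_sum = u(5.0) + u(5.0)   # roughly -4.47
whole_plan = u(5.0 + 5.0)        # roughly -3.16
assert per_step_sum != whole_plan
```

The specific utility function and numbers are assumptions chosen for illustration; the talk's general method addresses the mismatch in a principled way.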