Jim Blythe - Weh1327 - 2pm 09/28/94

Classical AI planners have typically assumed complete knowledge of the world, complete certainty in the outcomes of actions, and a world that is static apart from the actions of the planner. These assumptions have considerably limited the application of planning systems in the real world. A common response is to use a system that waits for failures while the plan is executed and re-plans dynamically, but this cannot handle situations where you must plan ahead for uncertainty (for example, bringing an umbrella).

I will talk about an implemented planning system that builds robust plans in the situations described above by reasoning about uncertainty at plan-generation time. This doesn't rule out re-planning at execution time, but it aims to reduce the need for it. The planner iterates between improving a plan and evaluating its probability of success (a toy sketch of this loop appears below). It builds plans using classical techniques and evaluates them with a Bayesian net generated automatically from the plan. This evaluation identifies a set of subgoals whose achievement by the symbolic planner will most improve the plan. The final plan contains conditional branches, so it is reactive.

This is definitely a work-in-progress talk, so I'm hoping there'll be plenty of discussion, both about the assumptions made so far and about how to proceed from here.
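To make the shape of the improve/evaluate loop concrete, here is a toy Python sketch. It is not the implemented system: the names Step, evaluate, improve, and refine are invented for this example, the product of independent step probabilities is only a crude stand-in for the automatically generated Bayesian net, and the "subgoal" chosen is simply the weakest step. The real evaluation and subgoal selection are what the talk will cover.

from dataclasses import dataclass, field

@dataclass
class Step:
    name: str
    p_success: float                              # chance this step works as planned
    branches: list = field(default_factory=list)  # conditional backup steps ("if this fails, do that")

def effective_p(step):
    """Probability the step's goal is achieved, counting its backup branches."""
    p_fail = 1.0 - step.p_success
    for backup in step.branches:                  # each branch is another chance to recover
        p_fail *= 1.0 - backup.p_success
    return 1.0 - p_fail

def evaluate(plan):
    """Crude stand-in for the Bayesian-net evaluation: treat steps as independent."""
    p = 1.0
    for step in plan:
        p *= effective_p(step)
    return p

def improve(plan, backups):
    """Add a conditional branch to the step that most limits the plan's success."""
    weakest = min(plan, key=effective_p)
    backup = backups.get(weakest.name)
    if backup is not None and backup not in weakest.branches:
        weakest.branches.append(backup)
    return plan

def refine(plan, backups, threshold=0.9, max_rounds=10):
    """Iterate between evaluating the plan and improving its weakest part."""
    for _ in range(max_rounds):
        if evaluate(plan) >= threshold:
            break
        plan = improve(plan, backups)
    return plan

# The umbrella example: walking to the talk may fail if it rains, so the
# refined plan gains a conditional "bring umbrella" branch.
plan = [Step("walk to talk", 0.7), Step("give talk", 0.99)]
backups = {"walk to talk": Step("bring umbrella", 0.95)}
print(f"success before: {evaluate(plan):.2f}")    # ~0.69
plan = refine(plan, backups)
print(f"success after:  {evaluate(plan):.2f}")    # ~0.98

The point of the sketch is only the control structure: evaluate the current plan's probability of success, pick the part of the plan whose improvement helps most, hand that subgoal back to the symbolic planner, and repeat until the plan is robust enough.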