In my thesis, I attempt to address the problem of planning and scheduling for multi-agent, uncertain domains with durative actions. This general problem remains a difficult challenge. Research that focuses on constructing plans, schedules, or policies has produced promising results but in general has difficulty scaling. A second body of work uses expected models and deterministic assumptions, constructing flexible plans and schedules (and/or replanning frequently) to absorb deviations at execution time. This work, while in general much more scalable, ignores the potential gain that results from explicitly considering uncertainty.
My work adopts a composite approach that attempts to couple the strengths of both of the above approaches. I assume as a starting point a deterministic scheduler that schedules and reschedules as necessary during execution. My main contribution is layering an uncertainty analysis on top of the deterministic schedules it produces, taking advantage of the known uncertainty model while avoiding the computational overhead of probabilistic scheduling. Throughout execution, an agent can use this analysis of its current deterministic schedule to identify probable weak points and strengthen them, ultimately leading to significantly higher overall reward. I am also interested in adding probability-driven meta-level plan management functionality, with the goal of allowing agents to intelligently decide whether it is best to spend computational resources rescheduling, strengthening, neither, or both. With these in place, I hope to be able to fully manage plans in a probabilistic way.
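As a rough illustration of the idea of layering an uncertainty analysis on a deterministic schedule, the sketch below runs a Monte Carlo simulation over a toy schedule to estimate each activity's probability of missing its deadline. The activity names, durations, deadlines, distributional assumptions (normally distributed durations), and the flagging threshold are all invented for this sketch; the thesis's actual analysis is not specified here.

```python
import random

# Hypothetical toy schedule: (activity, mean duration, deadline).
# Names and numbers are illustrative, not taken from the thesis.
SCHEDULE = [
    ("navigate", 4.0, 5.0),
    ("collect",  3.0, 9.0),
    ("transmit", 2.0, 10.0),
]

def miss_probabilities(schedule, sigma=1.0, trials=5000, seed=0):
    """Estimate, per activity, the probability that its finish time
    exceeds its deadline when each duration is drawn from a normal
    distribution centered on its deterministic (mean) value."""
    rng = random.Random(seed)
    misses = [0] * len(schedule)
    for _ in range(trials):
        t = 0.0
        for i, (_, mean, deadline) in enumerate(schedule):
            t += max(0.0, rng.gauss(mean, sigma))  # durations can't be negative
            if t > deadline:
                misses[i] += 1
    return [m / trials for m in misses]

probs = miss_probabilities(SCHEDULE)
# Activities whose miss probability exceeds a (hypothetical) threshold
# are flagged as weak points worth strengthening, e.g. by inserting
# slack or triggering a reschedule.
weak = [name for (name, _, _), p in zip(SCHEDULE, probs) if p > 0.2]
```

A meta-level plan manager of the kind described above could use such estimates to decide, at runtime, whether the expected gain from strengthening or rescheduling justifies the computation it costs.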