next up previous
Next: 7 Experiments Up: 6 Complexity Analyses Previous: 6.2 Complexity of Finding

6.3 Scheduling Complexity

A local search planner (e.g., ASPEN [Rabideau et al., 2000]) does not backtrack, but the problem to be solved is the same, so one might expect the complexity advantages to be the same as for the refinement planner. However, the search operations of a local search planner can be very different. A previous study showed that a technique called aggregation eliminates search inefficiencies at lower levels of detail by operating on task hierarchies as single tasks [Knight, Rabideau, Chien, 2000]. Thus, it is not immediately clear what additional improvement a scheduler could obtain using summary information. We will show that the improvement is significant, but first we provide more background on aggregation.

Moving tasks is a central scheduling operation in iterative repair planners. A planner can schedule tasks more effectively by moving related groups of tasks together so that the constraints among them are preserved. Hierarchical task representations are a common way of representing these groups and their constraints. Aggregation moves a fully detailed abstract task hierarchy as a unit while preserving the temporal ordering constraints among its subtasks. Moving individual tasks independently of their parents, siblings, and subtasks has been shown to be much less efficient [Knight, Rabideau, Chien, 2000]. Valid placements of a task hierarchy in the schedule are computed from the state and resource usage profiles of the hierarchy and of the other tasks in the context of the movement. A hierarchy's profile represents one instantiation of the decomposition and temporal ordering of the most abstract task in the hierarchy.
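As a minimal sketch of this idea (a hypothetical helper, not ASPEN's implementation), moving a hierarchy as a unit amounts to shifting every task in it by the same offset, so the relative times of the subtasks, and hence the temporal ordering constraints among them, are preserved:

```python
def move_hierarchy(tasks, delta):
    """Shift every task in a hierarchy by the same offset, preserving the
    temporal ordering constraints among subtasks (relative times unchanged)."""
    return {name: (start + delta, end + delta)
            for name, (start, end) in tasks.items()}

# Hypothetical three-step hierarchy with (start, end) times:
hierarchy = {"turn": (0, 5), "warm_up": (5, 12), "image": (12, 20)}
print(move_hierarchy(hierarchy, 30))  # every task shifts by 30; ordering intact
```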

Figure 19: Schedule of $n$ task hierarchies each with $c$ constraints on $v$ variables

Consider a schedule of $n$ task hierarchies with a maximum branching factor $b$ expanded to a maximum depth of $d$, as shown in Figure 19. Suppose each hierarchy has $c$ constraints on each of $v$ variables (states or metric resources). To move a hierarchy of tasks using aggregation, the scheduler must compute valid intervals for each resource variable affected by the hierarchy. The scheduler then intersects these intervals to get valid placements for the abstract task and its children. The complexity of computing the set of valid intervals for a resource is $O(cC)$, where $c$ is the number of constraints (usages) an abstract task and its children have on the variable, and $C$ is the number of constraints that other tasks in the schedule have on the variable [Knight, Rabideau, Chien, 2000]. With $n$ similar task hierarchies in the schedule, $C=(n-1)c$, so the complexity of computing valid intervals is $O(nc^2)$. This computation is done for each of the $v$ resource variables (often constant for a domain), so moving a task has complexity $O(vnc^2)$. The intersection of valid intervals across variables does not increase this complexity: there can be at most $O(nc)$ valid intervals on each variable's timeline, intersecting the intervals of a pair of timelines is linear in the number of intervals, and only $v-1$ pairwise intersections are needed to intersect all $v$ sets, for a total of $O(vnc)$.
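The intersection step can be sketched as follows (a simplified illustration with made-up timelines, not the scheduler's actual code): each variable's timeline yields a sorted list of valid intervals, and the valid placements for the hierarchy are obtained by intersecting the lists pairwise, linear work per pair.

```python
def intersect_two(a, b):
    """Intersect two sorted lists of (start, end) intervals in linear time."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        s = max(a[i][0], b[j][0])
        e = min(a[i][1], b[j][1])
        if s < e:
            out.append((s, e))
        # Advance whichever interval ends first.
        if a[i][1] < b[j][1]:
            i += 1
        else:
            j += 1
    return out

def valid_placements(per_variable_intervals):
    """Fold intersect_two over all v timelines: v - 1 pairwise passes."""
    result = per_variable_intervals[0]
    for intervals in per_variable_intervals[1:]:
        result = intersect_two(result, intervals)
    return result

# Hypothetical valid intervals on three resource timelines:
power   = [(0, 10), (15, 30)]
memory  = [(5, 20)]
antenna = [(0, 8), (18, 40)]
print(valid_placements([power, memory, antenna]))  # → [(5, 8), (18, 20)]
```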

The summary information of an abstract task represents all of the constraints of its children, but if the children share constraints over the same resource, this information is collapsed into a single summary resource usage in the abstract task. Therefore, when moving an abstract task, the number of different constraints involved may be far fewer depending on the domain. If the scheduler is trying to place a summarized abstract task among other summarized tasks, the computation of valid placement intervals can be greatly reduced because the $c$ in $O(vnc^2)$ is smaller. We now consider the two extreme cases where constraints can be fully collapsed and where they cannot be collapsed at all.

In the case that all tasks in a hierarchy have constraints on the same variable, the number of constraints in a hierarchy is $O(b^d)$ for a hierarchy of depth $d$ and branching factor (number of child tasks per parent) $b$. In aggregation, where hierarchies are fully detailed first, the complexity of moving a task is therefore $O(vnb^{2d})$ because $c=O(b^d)$. Now consider using aggregation to move a partially expanded hierarchy whose leaves are summarized abstract tasks. If all hierarchies in the schedule are decomposed to level $i$, there are $O(b^i)$ tasks in a hierarchy, each with one summarized constraint per variable representing the constraints of all of the yet undetailed subtasks beneath it. So $c=O(b^i)$, and the complexity of moving the task is $O(vnb^{2i})$. Thus, moving an abstract task using summary information can be a factor of $O(b^{2(d-i)})$ faster than aggregation on fully detailed hierarchies. Because the worst-case number of conflicts increases with the number of plan steps (just as for the refinement planner), the worst-case complexity of resolving conflicts based on the number of plan steps at level $i$ is $O(k^{b^i})$. Thus (as with refinement planning), using summary information can yield speedups of $O(k^{b^d-b^i}b^{2(d-i)})$ when summary information fully collapses.
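The task-movement speedup can be checked with a few lines of arithmetic under illustrative (made-up) values of $b$, $d$, $i$, $v$, and $n$:

```python
def move_cost(v, n, b, level):
    """O(v n c^2) cost of moving one hierarchy when hierarchies are
    decomposed to `level`, so c = b**level constraints per variable."""
    c = b ** level
    return v * n * c * c

b, d, i = 3, 4, 2   # branching factor 3, full depth 4, summarized at level 2
v, n = 1, 10        # one shared variable, ten hierarchies in the schedule

full = move_cost(v, n, b, d)        # aggregation on fully detailed hierarchies
summarized = move_cost(v, n, b, i)  # aggregation on level-i summarized tasks
print(full // summarized)           # speedup factor b**(2*(d-i)) = 3**4 = 81
```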

The other extreme is when all of the tasks place constraints on different variables. In this case, $c=1$ because any hierarchy has only one constraint per variable. Fully detailed hierarchies contain $v=O(b^d)$ different variables, so the complexity of moving a task is $O(nb^d)$. When moving a summarized abstract task where all hierarchies in the schedule are decomposed to level $i$, $v$ is unchanged because the abstract task summarizes all constraints of the subtasks beneath it, and each of those constraints is on a different variable, so no constraints combine when summarized. Thus, the complexity of moving a partially expanded hierarchy is the same as for a fully expanded one. The number of conflicts also does not change with the depth of the hierarchy because conflicts always arise between pairs of the $n$ hierarchies. So, in this extreme case, summary information does not reduce the complexity of scheduling and would only incur unnecessary overhead.
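Plugging this extreme into the same cost formula (again with illustrative values) shows why the decomposition level drops out: $c=1$ and $v=b^d$ regardless of the level at which tasks are summarized.

```python
def move_cost_distinct(n, b, d):
    """When every task constrains a distinct variable, c = 1 and v = b**d
    whether or not the hierarchy is summarized, so the O(v n c^2) cost of
    moving a task does not depend on the decomposition level i."""
    v, c = b ** d, 1
    return v * n * c * c

b, d, n = 3, 4, 10
print(move_cost_distinct(n, b, d))  # same value at any summarization level
```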

Other complexity analyses have shown that different forms of hierarchical problem solving, if they need not backtrack from lower to higher levels because there are no interacting subproblems, can reduce the size of the search space by an exponential factor [Korf, 1987, Knoblock, 1991]. A planner or scheduler using summary information can achieve exponential improvements without this assumption. Backtracking across abstraction levels occurs in the planner/coordinator described in Section 5.1 when the current search state is $\neg MightSomeWay$ and another $or$ subplan at the same or a higher level can be selected. We showed that the search space grows doubly exponentially down the hierarchy because the number of plans grows exponentially, and resolving conflicts grows exponentially with the number of plans. Thus, as long as the planner or coordinator does not have to fully expand all abstract plans to the primitive level and summary information merges constraints at higher levels, the search complexity is reduced by at least a factor of $k^{b^d-b^i}$, where $i$ is the level at which the search completed and $d$ is the depth of the hierarchy. Yang [1997] also suggests ways that exponential speedups can be obtained when subplans interact, based on hierarchy structure. Our speedups are complementary because summary information limits the decomposition of task hierarchies and compresses the information manipulated by a planner or scheduler.

Bradley Clement 2006-12-29