6.3 Scheduling Complexity

A local search planner (e.g., ASPEN [Rabideau et al., 2000]) does
not backtrack, but the problem to be solved is the same, so one might
expect the complexity advantages to be the same as for the refinement
planner. However, the search operations for a local search planner
can be very different. A
previous study of a technique called
*aggregation* eliminates
search inefficiencies at lower levels of detail in task hierarchies
by operating on hierarchies as single tasks [Knight, Rabideau, Chien, 2000]. Thus, it is not immediately clear what additional
improvements a scheduler could obtain using summary information.
We will show that the improvements are significant, but first we must provide more background on aggregation.

Moving tasks is a central scheduling operation in iterative repair planners. A planner can more effectively schedule tasks by moving related groups of tasks to preserve constraints among them. Hierarchical task representations are a common way of representing these groups and their constraints. Aggregation involves moving a fully detailed abstract task hierarchy while preserving the temporal ordering constraints among the subtasks. Moving individual tasks independently of their parent, siblings, and subtasks is shown to be much less efficient [Knight, Rabideau, Chien, 2000]. Valid placements of the task hierarchy in the schedule are computed from the state and resource usage profiles for the hierarchy and for the other tasks in the context of the movement. A hierarchy's profile represents one instantiation of the decomposition and temporal ordering of the most abstract task in the hierarchy.
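The mechanics of moving a hierarchy as a single unit can be sketched as follows. The `Task` structure, resource-event profile, and `move` operation here are hypothetical simplifications for illustration, not the ASPEN data model:

```python
# Hypothetical sketch of aggregation: a hierarchy's resource-usage
# profile is computed once, and the whole hierarchy is then shifted as
# a unit, preserving the relative temporal offsets of its subtasks.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Task:
    start: int                       # offset relative to the parent's start
    duration: int
    usage: int                       # units of one resource consumed
    children: List["Task"] = field(default_factory=list)

def profile(task, origin=0, events=None):
    """Flatten a hierarchy into sorted (time, delta) resource events."""
    if events is None:
        events = []
    t = origin + task.start
    if not task.children:            # only leaf tasks consume the resource
        events.append((t, +task.usage))
        events.append((t + task.duration, -task.usage))
    for child in task.children:
        profile(child, t, events)
    return sorted(events)

def move(task, offset):
    """Move the whole hierarchy: one shift of the root's start preserves
    every ordering constraint among the subtasks."""
    task.start += offset

# Example hierarchy: a parent with two temporally ordered subtasks.
root = Task(start=0, duration=10, usage=0,
            children=[Task(0, 4, 2), Task(5, 4, 3)])
```

Because subtask starts are stored relative to the parent, one update to the root relocates the entire hierarchy's profile intact, which is the property aggregation exploits.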

Consider a schedule of task hierarchies with a maximum branching
factor b expanded to a maximum depth of d, as shown in Figure
19. Suppose each hierarchy has constraints on
each of v variables (states or metric resources). To move a
hierarchy of tasks using aggregation, the scheduler must compute valid intervals
for each resource variable affected by the hierarchy.^{7} The scheduler then
intersects these
intervals to get valid placements for the abstract
tasks and their children. The complexity of computing the set of
valid intervals for a resource is O(c_t(c_t + c_s)), where c_t is the number of
constraints (usages) an abstract task has with its children for the
variable, and c_s is the number of constraints of other tasks in the
schedule on the variable [Knight, Rabideau, Chien, 2000]. With n similar
task hierarchies in the entire schedule, c_s = (n-1)c_t, and the
complexity of computing valid intervals is O(nc_t^2). But this
computation is done for each of the v resource variables (v is often constant
for a domain), so moving a task has a complexity of O(vnc_t^2).
The intersection of valid intervals across variables does not increase
the complexity. Its complexity is O(vnc_t) because there can be at
most O(nc_t) valid intervals for each timeline; intersecting the intervals
of a pair of timelines is linear in the number of intervals; and
only v-1 pairs of timelines need to be intersected to get the
intersection of the set.
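The intersection step above can be sketched as a linear merge of sorted interval lists; `intersect_pair` and `intersect_all` are illustrative names for this note, not functions from any scheduler:

```python
# Sketch of the interval-intersection step: intersecting two sorted,
# disjoint interval lists is a linear merge, and folding the result
# across all v timelines takes only v - 1 pairwise merges.
def intersect_pair(a, b):
    """Linear-time intersection of two sorted, disjoint interval lists."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        lo = max(a[i][0], b[j][0])
        hi = min(a[i][1], b[j][1])
        if lo < hi:                  # overlapping portion is valid in both
            out.append((lo, hi))
        # advance whichever interval ends first
        if a[i][1] < b[j][1]:
            i += 1
        else:
            j += 1
    return out

def intersect_all(timelines):
    """Fold pairwise intersection over the per-variable interval lists."""
    result = timelines[0]
    for tl in timelines[1:]:
        result = intersect_pair(result, tl)
    return result
```

The surviving intervals are the placements valid on every timeline at once, which is why the fold does not dominate the O(vnc_t^2) cost of computing the per-variable intervals.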

The summary information of an abstract task represents all of the
constraints of its children, but if the children share constraints
over the same resource, this information is
collapsed into a single *summary* resource usage in the abstract
task. Therefore, when moving an abstract task, the number of
different constraints involved may be far fewer depending on the domain. If
the scheduler is trying to place a summarized abstract task among
other summarized tasks, the computation of valid placement
intervals can be greatly reduced because the c_t in O(vnc_t^2) is
smaller. We now consider the two extreme cases, where constraints can be
fully collapsed and where they cannot be collapsed at all.
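A minimal sketch of this collapsing, assuming a simplified representation in which a usage is just a (variable, amount) pair and same-variable usages combine by summation (real summary information tracks ranges and timing, not just totals):

```python
# Sketch of how summary information can collapse constraints: child
# usages on the same variable merge into one summarized usage, so the
# constraint count c_t that a move must process can shrink. Summing
# amounts is a deliberate simplification for illustration.
from collections import defaultdict

def summarize(child_usages):
    """child_usages: list of (variable, amount) pairs.
    Returns one summarized usage per distinct variable."""
    summary = defaultdict(int)
    for var, amount in child_usages:
        summary[var] += amount       # collapse same-variable usages
    return dict(summary)

# Extreme 1: all constraints on one variable -> collapses to 1 usage.
same = [("power", 2), ("power", 3), ("power", 1)]
# Extreme 2: all on different variables -> nothing collapses.
diff = [("power", 2), ("memory", 3), ("bandwidth", 1)]
```

The two lists preview the two extremes analyzed next: `same` collapses to a single summary usage, while `diff` keeps one usage per variable no matter how it is summarized.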

In the case that all tasks in a hierarchy have constraints on the same variable, the number of constraints in a hierarchy is O(b^d) for a hierarchy of depth d and branching factor (number of child tasks per parent) b. In aggregation, where hierarchies are fully detailed first, this means that the complexity of moving a task is O(vnb^{2d}) because c_t = O(b^d). Now consider using aggregation for moving a partially expanded hierarchy where the leaves are summarized abstract tasks. If all hierarchies in the schedule are decomposed to level i, there are O(b^i) tasks in a hierarchy, each with one summarized constraint representing those of all of the yet undetailed subtasks beneath it for each constraint variable. So c_t = O(b^i), and the complexity of moving the task is O(vnb^{2i}). Thus, moving an abstract task using summary information can be a factor of O(b^{2(d-i)}) times faster than aggregation alone. Because the worst case number of conflicts increases with the number of plan steps (just as with the refinement planner), the worst case complexity of resolving conflicts based on the number of plan steps at level i is O(vn^2b^{3i}). Thus (as with refinement planning) using summary information can give speedups of O(b^{3(d-i)}) when summary information fully collapses.
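To make the per-move factor concrete, here is a small worked example with illustrative values of b, d, and i (chosen for this note, not taken from the analysis above):

```python
# Worked numbers for the fully-collapsing case: the cost of one move is
# O(v * n * b^(2i)) at decomposition level i, so stopping at level i
# instead of the full depth d saves a factor of b^(2(d-i)) per move.
def move_cost(v, n, b, i):
    """Dominant term of the cost of one aggregation move at level i."""
    return v * n * b ** (2 * i)

b, d, i = 3, 4, 2                    # illustrative branching, depth, level
speedup = move_cost(1, 1, b, d) / move_cost(1, 1, b, i)
# speedup == b ** (2 * (d - i)) == 3**4 == 81
```

Even stopping the decomposition only two levels short of the leaves makes each move almost two orders of magnitude cheaper in this toy setting; the additional O(b^{d-i}) factor from the reduced number of conflicts compounds this further.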

The other extreme is when all of the tasks place constraints on different variables. In this case, c_t = 1 because any hierarchy can have only one constraint per variable. Fully detailed hierarchies contain v = O(b^d) different variables, so the complexity of moving a task in this case is O(vn) = O(nb^d). If moving a summarized abstract task where all tasks in the schedule are decomposed to level i, the number of constraints is the same because the abstract task summarizes all constraints for each subtask in the hierarchy beneath it, and each of those constraints is on a different variable, so no constraints combine when summarized. Thus, the complexity of moving a partially expanded hierarchy is the same as for a fully expanded one. In this case, the number of conflicts also does not change with the depth of the hierarchy because the conflicts are always between pairs of the hierarchies. So, for this other extreme case, summary information does not reduce the complexity of scheduling and would only incur unnecessary overhead.

Other complexity analyses have shown that different forms of hierarchical problem solving, if they do not need to backtrack from lower to higher levels because there are no interacting subproblems, can reduce the size of the search space by an exponential factor [Korf, 1987; Knoblock, 1991]. A planner or scheduler using summary information can witness exponential improvements without this assumption. Backtracking across abstraction levels occurs within the planner/coordinator described in Section 5.1 when the current search state is pruned and another subplan at the same or a higher level can be selected. We demonstrated that the search space grows doubly exponentially down the hierarchy because the number of plans grows exponentially, and resolving conflicts grows exponentially with the number of plans. Thus, as long as the planner or coordinator does not have to fully expand all abstract plans to the primitive level and summary information merges at higher levels, the search complexity is reduced at least by a factor of O(k^{b^d - b^i}) for some constant k, where i is the level where the search completed, and d is the depth of the hierarchy. Yang [1997] also suggests ways exponential speedups can be obtained when subplans interact, based on hierarchy structure. Our speedups are complementary to these because summary information limits the decomposition of task hierarchies and compresses the information manipulated by a planner or scheduler.