

3.5.2 Resource Summarization Algorithm

The state summarization algorithm in Section 3.4 recursively propagates summary conditions upward from the leaves of an $and$/$or$ hierarchy, and the resource summarization algorithm takes the same approach. Starting at the leaves, the algorithm finds primitive tasks that use constant amounts of a resource. The resource summary of a task using $x$ units of a resource over its duration is $\langle[x$,$x]$,$[x$,$x]$,$[0$,$0]\rangle$ for a non-consumable resource or $\langle[x$,$x]$,$[x$,$x]$,$[x$,$x]\rangle$ for a consumable one.
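As a concrete illustration (a minimal sketch, not part of the paper's formalism), the summary usage triple and the leaf case can be written as follows; the names SummaryUsage and primitive_summary are invented for this sketch.
\begin{verbatim}
from typing import NamedTuple, Tuple

Range = Tuple[float, float]        # (lower bound, upper bound)

class SummaryUsage(NamedTuple):
    local_min: Range
    local_max: Range
    persist: Range

def primitive_summary(x: float, consumable: bool) -> SummaryUsage:
    # Leaf case: a primitive task using a constant x units of the resource.
    if consumable:
        # the usage persists after the task ends
        return SummaryUsage((x, x), (x, x), (x, x))
    # non-consumable: the resource is released when the task ends
    return SummaryUsage((x, x), (x, x), (0.0, 0.0))
\end{verbatim}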

Moving up the $and$/$or$ tree, the summarization algorithm comes to either an $and$ branch or an $or$ branch. For an $or$ branch, the combined summary usage comes from the $or$ computation \begin{displaymath}
\begin{array}[t]{lll}
\langle & [min_{c \in children}(lb(local\_min(c))),\ & max_{c \in children}(ub(local\_min(c)))],\\
        & [min_{c \in children}(lb(local\_max(c))),\ & max_{c \in children}(ub(local\_max(c)))],\\
        & [min_{c \in children}(lb(persist(c))),\ & max_{c \in children}(ub(persist(c)))]\ \rangle,
\end{array}\end{displaymath} where $lb()$ and $ub()$ extract the lower and upper bounds of a range, respectively. The $children$ are the branch's children with their durations extended to the length of the longest child. This duration extension alters a child's resource summary information because the child's usage profile has zero resource usage during the extension. For instance, in determining the resource usage for $move$(A,B), the algorithm combines two 40 minute tasks with a 50 minute task. The resulting summary information describes a 50 minute abstract task whose profile might have a zero watt power usage for 10 minutes. This extension is why $move$(A,B) has [0,4] for its $local\_min$ instead of [3,4]. Planners that reason about variable durations could use [3,4] for a duration ranging from 40 to 50 minutes.
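Continuing the sketch above, the $or$ computation can be transcribed directly; or_combine is an invented name, and the children are assumed to already have their durations extended to the longest child.
\begin{verbatim}
def or_combine(children):
    # or computation: the bounds must cover whichever child is chosen.
    def span(ranges):
        return (min(lb for lb, ub in ranges), max(ub for lb, ub in ranges))
    return SummaryUsage(
        span([c.local_min for c in children]),
        span([c.local_max for c in children]),
        span([c.persist for c in children]))
\end{verbatim}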

Computing an $and$ branch's summary information is more complicated because of timing choices among loosely constrained subtasks. The take $x$ path examples illustrate the simplest sub-case, where subtasks are tightly constrained to execute serially. Here profiles are appended together, and the resulting summary usage information comes from the SERIAL-AND computation \begin{displaymath}
\begin{array}[t]{lll}
\langle & [min_{c \in children}(\Sigma_{lb}^{pre}(c)+lb(local\_min(c))),\ & min_{c \in children}(\Sigma_{ub}^{pre}(c)+ub(local\_min(c)))],\\
        & [max_{c \in children}(\Sigma_{lb}^{pre}(c)+lb(local\_max(c))),\ & max_{c \in children}(\Sigma_{ub}^{pre}(c)+ub(local\_max(c)))],\\
        & [\Sigma_{c \in children}(lb(persist(c))),\ & \Sigma_{c \in children}(ub(persist(c)))]\ \rangle,
\end{array}\end{displaymath} where $\Sigma_{lb}^{pre}(c)$ and $\Sigma_{ub}^{pre}(c)$ are the respective lower and upper bounds on the cumulative persistent usage of the children that execute before $c$. These computations have the same form as the $\Sigma$ computations for the final $persist$.
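A sketch of the SERIAL-AND computation, continuing the example code above; serial_and is an invented name, the children are assumed to be given in execution order, and the running sums implement the $\Sigma^{pre}$ terms.
\begin{verbatim}
def serial_and(children):
    # SERIAL-AND: children's profiles are appended in the given order.
    # Assumes a non-empty list of children.
    pre_lb, pre_ub = [], []        # Sigma^pre bounds for each child
    acc_lb = acc_ub = 0.0
    for c in children:
        pre_lb.append(acc_lb)
        pre_ub.append(acc_ub)
        acc_lb += c.persist[0]
        acc_ub += c.persist[1]
    local_min = (min(pre_lb[i] + c.local_min[0] for i, c in enumerate(children)),
                 min(pre_ub[i] + c.local_min[1] for i, c in enumerate(children)))
    local_max = (max(pre_lb[i] + c.local_max[0] for i, c in enumerate(children)),
                 max(pre_ub[i] + c.local_max[1] for i, c in enumerate(children)))
    return SummaryUsage(local_min, local_max, (acc_lb, acc_ub))
\end{verbatim}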

The case where all subtasks execute in parallel and have identical durations is slightly simpler. Here the usage profiles add together, and the branch's resulting summary usage comes from the PARALLEL-AND computation \begin{displaymath}
\begin{array}[t]{lll}
\langle & [\Sigma_{c \in children}(lb(local\_min(c))),\ & min_{c \in children}(ub(local\_min(c))+\Sigma_{ub}^{non}(c))],\\
        & [max_{c \in children}(lb(local\_max(c))+\Sigma_{lb}^{non}(c)),\ & \Sigma_{c \in children}(ub(local\_max(c)))],\\
        & [\Sigma_{c \in children}(lb(persist(c))),\ & \Sigma_{c \in children}(ub(persist(c)))]\ \rangle,
\end{array}\end{displaymath} where $\Sigma_{ub}^{non}(c)$ and $\Sigma_{lb}^{non}(c)$ are the respective sums of the $local\_max$ upper bounds and the $local\_min$ lower bounds over all children except $c$.
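A corresponding sketch of the PARALLEL-AND computation, again with invented names; here the $\Sigma^{non}$ terms are obtained by subtracting each child's own bound from the total over all children.
\begin{verbatim}
def parallel_and(children):
    # PARALLEL-AND: equal-duration children whose profiles add together.
    # Assumes a non-empty list of children.
    sum_min_lb = sum(c.local_min[0] for c in children)
    sum_max_ub = sum(c.local_max[1] for c in children)
    local_min = (sum_min_lb,
                 min(c.local_min[1] + (sum_max_ub - c.local_max[1])   # Sigma_ub^non(c)
                     for c in children))
    local_max = (max(c.local_max[0] + (sum_min_lb - c.local_min[0])   # Sigma_lb^non(c)
                     for c in children),
                 sum_max_ub)
    persist = (sum(c.persist[0] for c in children),
               sum(c.persist[1] for c in children))
    return SummaryUsage(local_min, local_max, persist)
\end{verbatim}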

To handle $and$ tasks with loose temporal constraints, we consider all legal orderings of the child tasks' endpoints. For example, in the rover's early morning tasks, three serial solar energy collection subtasks run in parallel with a subtask to drive to location $B$. Figure 12 shows one possible ordering of the subtask endpoints, which breaks $move$(A,B) into three pieces and two of the $soak\_rays$ children in half. Given an ordering, the summarization algorithm can (1) use the endpoints of the children to determine subintervals, (2) compute summary information for each child task/subinterval combination, (3) combine the parallel subinterval summaries using the PARALLEL-AND computation, and (4) chain the subintervals together using the SERIAL-AND computation. Finally, the $and$ task's summary is computed by combining the summaries for all possible orderings using an $or$ computation.

Figure 12: Possible task ordering for a rover's morning activities, with resulting subintervals.
\begin{figure}\centerline{\psfig{figure=taskorder1.eps,height=1.0in}}\end{figure}

Here we describe how step (2) generates different summary resource usages for the subintervals of a child task. A child task with summary resource usage $\langle[a$,$b]$,$[c$,$d]$,$[e$,$f]\rangle$ contributes one of two summary resource usages to each intersecting subinterval: \begin{displaymath}
\langle[a,b],[c,d],[0,0]\rangle,\langle[a,d],[a,d],[0,0]\rangle.
\end{displaymath} While the first usage has the tighter [$a$,$b$],[$c$,$d$] local ranges, the second has the looser [$a$,$d$],[$a$,$d$] local ranges. Since the $b$ and $c$ bounds only apply to the subintervals containing the subtask's minimum and maximum usages, the tighter ranges apply to just one of a subtask's intersecting subintervals. Although the minimum and maximum usages may not occur in the same subinterval, symmetry arguments let us connect them in the computation. Thus one subinterval gets the tighter local ranges, all other intersecting subintervals get the looser local ranges, and the extra complexity comes from having to investigate all subtask/subinterval assignment options. For instance, three subintervals intersect $move$(A,B) in Figure 12, giving three different assignments of summary resource usages to the subintervals: [0,4],[4,6] is placed in one subinterval with [0,6],[0,6] in the other two. These placement options mean that a subtask with $n$ subintervals has $n$ possible subinterval assignments, so if there are $m$ child tasks, each with $n$ alternate assignments, there are $n^{m}$ combinations of potential subtask/subinterval summary resource usage assignments. Thus propagating summary information through an $and$ branch is exponential in the number of subtasks with multiple internal subintervals. However, since the number of subtasks is controlled by the domain modeler and is usually bounded by a constant, this computation is tractable. In addition, summary information can often be derived offline for a domain. The propagation algorithm thus takes the form of the four steps above, repeated for each legal ordering and each subinterval assignment option, with the results combined using the $or$ computation.
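To make the overall propagation concrete, the following is an illustrative sketch continuing the earlier code, not the original pseudocode. The callables legal_orderings and candidate_usages are hypothetical placeholders for the ordering enumeration and the step (2) assignment options, and folding the assignment alternatives into the same $or$ combination as the orderings is an assumption of this sketch.
\begin{verbatim}
from itertools import product

def and_branch_summary(children, legal_orderings, candidate_usages):
    # Sketch of propagating summary resource usage through an `and` branch
    # with loose temporal constraints.  legal_orderings(children) is assumed
    # to yield, for each legal ordering of child endpoints, the list of
    # subintervals; candidate_usages(child, subs) is assumed to return that
    # child's alternative per-subinterval usage assignments (step (2)).
    candidates = []
    for subs in legal_orderings(children):            # step (1)
        options = [candidate_usages(c, subs) for c in children]
        for assignment in product(*options):          # n^m assignment choices
            # step (3): add concurrent usages within each subinterval
            per_sub = [parallel_and([child_usages[i] for child_usages in assignment])
                       for i in range(len(subs))]
            # step (4): chain the subintervals together
            candidates.append(serial_and(per_sub))
    # Assumption: all ordering/assignment alternatives are folded into
    # the `or` computation.
    return or_combine(candidates)
\end{verbatim}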

Now that we have described how to derive summary information, we can discuss how to use it.

