$ LUG$ Heuristics

The heuristics computed on the $ LUG$ can capture measures similar to the $ MG$ heuristics, but the labels present a new opportunity to improve heuristic computation efficiency. When no state aggregation is being measured, a single planning graph suffices, so we do not discuss such measures for the $ LUG$ .

Positive Interaction Aggregation: Unlike the $ MG$ heuristics, we do not compute positive-interaction-based relaxed plans on the $ LUG$ . The $ MG$ approach to measuring positive interaction across the states in a belief state is to compute multiple relaxed plans and take their maximum value. To get the same measure on the $ LUG$ we would still need to extract multiple relaxed plans, the very situation we are trying to avoid by using the $ LUG$ . While the $ LUG$ may lower the graph construction overhead, the heuristic computation could still take too long. Hence, we do not compute relaxed plans on the $ LUG$ to measure positive interaction alone, but we do compute relaxed plans that measure overlap (which measures positive interaction).

Independence Aggregation: As with positive interaction aggregation, we would need a relaxed plan for every state in the projected belief state to find the summation of the costs. Hence, we do not compute relaxed plans that assume independence.

State Overlap Aggregation: A relaxed plan extracted from the $ LUG$ to get the $ h^{LUG}_{RP}$ heuristic resembles the unioned relaxed plan in the $ h^{MG}_{RPU}$ heuristic. Recall that the $ h^{MG}_{RPU}$ heuristic extracts a relaxed plan from each of the multiple planning graphs (one for each possible world) and unions the set of actions chosen at each level in each of the relaxed plans. The $ LUG$ relaxed plan heuristic is similar in that it counts actions that have positive interaction in multiple worlds only once and accounts for independent actions that are used in subsets of the possible worlds. The advantage of $ h^{LUG}_{RP}$ is that we find these actions with a single pass on one planning graph.

We are trading the cost of computing multiple relaxed plans for the cost of manipulating $ LUG$ labels to determine which lines of causal support are used in which worlds. In the relaxed plan we want to support the goal in every state of $ BS_P$ , but in doing so we need to track which states in $ BS_P$ use which paths in the planning graph. A subgoal may have several different (and possibly overlapping) paths from the worlds in $ BS_P$ .

A $ LUG$ relaxed plan is a set of layers: $ \{{\cal A}^{RP}_0, {\cal
E}^{RP}_0, {\cal L}^{RP}_{1}, ...,$ $ {\cal A}^{RP}_{b-1}, {\cal
E}^{RP}_{b-1},$ $ {\cal L}^{RP}_{b}\}$ , where $ {\cal A}^{RP}_r$ is a set of actions, $ {\cal E}^{RP}_r$ is a set of effects, and $ {\cal L}^{RP}_{r+1}$ is a set of clauses. The elements of the layers are labelled to indicate the worlds of $ BS_P$ where they are chosen for support. The relaxed plan is extracted from the level $ b = h^{LUG}_{level}(BS_i)$ (i.e., the first level where $ BS_i$ is reachable, also described in Appendix A).
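To make this bookkeeping concrete, the layer structure can be sketched in Python. This is an illustration, not the authors' implementation: the `Layer` class, the encoding of labels as frozensets of world indices, and the literal names are all assumptions.

```python
from dataclasses import dataclass, field

# Sketch (assumed encoding): a label is a frozenset of world indices drawn
# from BS_P, and a relaxed plan is a list of layers
# (A^RP_0, E^RP_0, L^RP_1, ..., A^RP_{b-1}, E^RP_{b-1}, L^RP_b),
# where layer r holds A^RP_r, E^RP_r, and L^RP_{r+1}.

@dataclass
class Layer:
    actions: dict = field(default_factory=dict)  # A^RP_r: action -> label
    effects: dict = field(default_factory=dict)  # E^RP_r: effect -> label
    clauses: dict = field(default_factory=dict)  # L^RP_{r+1}: clause -> label

# A two-level relaxed plan (b = 2) is then just:
rp = [Layer(), Layer()]

# The goal clauses are labelled with all worlds of BS_P (here worlds 0 and 1):
rp[1].clauses["~arm"] = frozenset({0, 1})
rp[1].clauses["~clog"] = frozenset({0, 1})
```

Keeping a label on every element, rather than on whole layers, is what lets a single graph stand in for one relaxed plan per world.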

Note that we extract the relaxed plan for $ BS_i$ in terms of clauses, not literals, which differs from the $ SG$ and $ MG$ versions of relaxed plans. Previously we found the constituent of $ BS_i$ that was first reached on a planning graph; now we do not commit to any one constituent. Our rationale is that we were possibly using different constituents in each of the multiple graphs, and in this condensed version of the multiple graphs we still want to be able to support different constituents of $ BS_i$ in different worlds. We could also use the constituent representation of $ BS_i$ in defining the layers of the relaxed plan, but we choose the clausal representation of $ BS_i$ instead because we know that we have to support each clause, whereas with constituents we know we only need to support one (but not which one).

The relaxed plan for $ BS_I$ to reach $ BS_G$ in CBTC, shown in bold in Figure 7, is as follows:

$\displaystyle \begin{array}{l}
{\cal A}^{RP}_0 = \{{\tt inP1}_p,\ {\tt inP2}_p,\ {\tt Flush}\},\\
{\cal E}^{RP}_0 = \{\varphi^0({\tt inP1}_p),\ \varphi^0({\tt inP2}_p),\ \varphi^0({\tt Flush})\},\\
{\cal L}^{RP}_1 = \{{\tt inP1},\ {\tt inP2},\ \neg {\tt clog}\},\\
{\cal A}^{RP}_1 = \{\neg {\tt clog}_p,\ {\tt DunkP1},\ {\tt DunkP2}\},\\
{\cal E}^{RP}_1 = \{\varphi^0(\neg {\tt clog}_p),\ \varphi^1({\tt DunkP1}),\ \varphi^1({\tt DunkP2})\},\\
{\cal L}^{RP}_2 = \{\neg {\tt arm},\ \neg {\tt clog}\},\\
\quad\ell^{RP}_2(\neg {\tt arm}) = BS_P,\ \quad\ell^{RP}_2(\neg {\tt clog}) = BS_P
\end{array}$
We start by forming $ {\cal L}_2^{RP}$ with the clauses in $ \kappa(BS_G)$ , namely $ \neg$ arm and $ \neg$ clog; we label the clauses with $ BS_P$ because they need to be supported by all states in our belief state. Next, we support each clause in $ {\cal L}_2^{RP}$ with the relevant effects from $ {\cal E}_1$ to form $ {\cal E}_1^{RP}$ . For $ \neg$ clog we use persistence because it supports $ \neg$ clog in all worlds described by $ BS_P$ (this is an example of positive interaction of worlds). For $ \neg$ arm the relevant effects are the respective $ \varphi^1$ from each Dunk action. We choose both effects to support $ \neg$ arm because we need to support $ \neg$ arm in all worlds of $ BS_P$ , and each effect gives support in only one world (this is an example of independence of worlds). We then insert the actions associated with each chosen effect into $ {\cal A}_1^{RP}$ with the appropriate label indicating the worlds where it was needed, which in general is fewer worlds than where it is reachable (i.e. it is always the case that $ \ell^{RP}_r(\cdot) \models \ell_r(\cdot)$ ). Next we form $ {\cal L}_1^{RP}$ with the execution preconditions of actions in $ {\cal A}_1^{RP}$ and antecedents of effects in $ {\cal E}_1^{RP}$ , which are $ \neg$ clog, inP1, and inP2, labelled with all worlds where an action or effect needed them. In the same fashion as level two, we support the literals at level one, using persistence for inP1 and inP2, and Flush for $ \neg$ clog. We stop here, because we have supported all clauses at level one.
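The first extraction step of this walkthrough can be sketched as follows, under the assumption that the two worlds of $ BS_P$ are encoded as indices 0 (where inP1 holds) and 1 (where inP2 holds); the effect and action names are hypothetical stand-ins.

```python
# Worlds of BS_P (assumed encoding): 0 = bomb in package 1, 1 = bomb in package 2.
BS_P = frozenset({0, 1})

# Relevant effects for each goal clause, with the worlds (labels) they reach.
support = {
    "~clog": {"noop(~clog)": BS_P},            # persistence reaches both worlds
    "~arm":  {"phi1(DunkP1)": frozenset({0}),  # only where the bomb is in P1
              "phi1(DunkP2)": frozenset({1})}, # only where the bomb is in P2
}

def chosen(clause):
    """Pick effects until their labels cover all of BS_P."""
    picked, covered = [], set()
    for eff, label in support[clause].items():
        if covered >= BS_P:
            break
        picked.append(eff)
        covered |= label
    return picked

# Positive interaction of worlds: one persistence effect suffices for ~clog.
# Independence of worlds: both Dunk effects are needed for ~arm.
```

Running `chosen("~clog")` selects only the persistence effect, while `chosen("~arm")` must select both Dunk effects, mirroring the two aggregation cases described above.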

For the general case, extraction starts at the level $ b$ where $ BS_i$ is first reachable from $ BS_P$ . The first relaxed plan layers we construct are $ {\cal A}^{RP}_{b-1}, {\cal E}^{RP}_{b-1}, {\cal L}^{RP}_{b}$ , where $ {\cal L}^{RP}_{b}$ contains all clauses $ C \in \kappa(BS_i)$ , each labelled as $ \ell^{RP}_b(C) = BS_P$ .

For each level $ r$ , $ 1 \leq r \leq b$ , we support each clause in $ {\cal L}^{RP}_{r}$ by choosing relevant effects from $ {\cal E}_{r-1}$ to form $ {\cal E}^{RP}_{r-1}$ . An effect $ \varphi^j(a)$ is relevant if it is reachable in some of the worlds where we need to support $ C$ (i.e., $ \ell_{r-1}(\varphi^j(a)) \wedge \ell^{RP}_{r}(C) \not= \perp$ ) and its consequent gives a literal $ l \in C$ . For each clause, we must choose enough supporting effects so that the worlds of the chosen effects are a superset of the worlds where the clause needs support, formally:

$\displaystyle \forall_{C \in {\cal L}^{RP}_{r}}\ \ell^{RP}_r(C) \models \bigvee\limits_{l \in C,\ \varphi^j(a) \in {\cal E}^{RP}_{r-1}} \ell^{RP}_{r-1}(\varphi^j(a))$
We think of supporting a clause in a set of worlds as a set cover problem, where effects cover subsets of worlds. Our algorithm for covering the worlds of a clause with the worlds of effects is a variant of the well-known greedy algorithm for set cover [12]. We first choose all relevant persistence effects that can cover worlds, then repeatedly choose the action effect that covers the most new worlds. Each effect we choose for support is added to $ {\cal E}^{RP}_{r-1}$ and labelled with the new worlds it covers for $ C$ . Once all clauses in $ {\cal L}^{RP}_{r}$ are covered, we form the action layer $ {\cal A}^{RP}_{r-1}$ from all actions that have an effect in $ {\cal E}^{RP}_{r-1}$ . The actions in $ {\cal A}^{RP}_{r-1}$ are labelled to indicate all worlds where any of their effects were labelled in $ {\cal E}^{RP}_{r-1}$ .
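The greedy cover step can be sketched as below. This is a sketch under assumed interfaces, not the authors' code: worlds are frozensets of indices, `candidates` maps each relevant effect to the worlds in which it is reachable, and `is_persistence` distinguishes persistence effects.

```python
def cover_clause_worlds(goal_worlds, candidates, is_persistence):
    """Greedy world cover for one clause (a variant of greedy set cover):
    first take every relevant persistence effect, then repeatedly take the
    action effect covering the most new worlds.

    Returns a dict mapping each chosen effect to the new worlds it covered,
    i.e. its label in E^RP_{r-1}."""
    chosen, uncovered = {}, set(goal_worlds)
    # 1. Persistence effects are preferred wherever they apply.
    for eff, worlds in candidates.items():
        if is_persistence(eff) and worlds & uncovered:
            chosen[eff] = frozenset(worlds & uncovered)
            uncovered -= worlds
    # 2. Greedily add the action effect covering the most new worlds.
    while uncovered:
        eff = max(candidates, key=lambda e: len(candidates[e] & uncovered))
        new = candidates[eff] & uncovered
        if not new:
            break  # no effect supports the clause in the remaining worlds
        chosen[eff] = frozenset(new)
        uncovered -= new
    return chosen
```

On the CBTC clause $ \neg$ arm, where each Dunk effect reaches only one of the two worlds, the loop picks both effects, each labelled with the single world it newly covers.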

We obtain the next subgoal layer, $ {\cal L}^{RP}_{r-1}$ , by adding literals from the execution preconditions of actions in $ {\cal
A}^{RP}_{r-1}$ and antecedents of effects in $ {\cal E}^{RP}_{r-1}$ . Each literal $ l \in {\cal L}^{RP}_{r-1}$ is labelled to indicate all worlds where any action or effect requires $ l$ . We support the literals in $ {\cal L}^{RP}_{r-1}$ in the same fashion as $ {\cal L}^{RP}_{r}$ . We continue to support literals with effects, insert actions, and insert action and effect preconditions until we have supported all literals in $ {\cal L}^{RP}_{1}$ .

Once we get a relaxed plan, the relaxed plan heuristic, $ h_{RP}^{LUG}(BS_i)$ , is the summation of the number of actions in each action layer, formally:

$\displaystyle h^{LUG}_{RP}(BS_i) = \sum\limits_{r = 0}^{b-1} \mid {\cal A}^{RP}_{r} \mid$
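As a small sketch of this summation: presumably, as in classical relaxed-plan heuristics, persistence (noop) actions are not counted, which is the convention that yields the value 3 reported for CBTC; the action names and the `is_persistence` predicate here are assumptions.

```python
def h_rp(action_layers, is_persistence):
    """Sum the non-persistence actions over all action layers A^RP_r."""
    return sum(1 for layer in action_layers
                 for a in layer
                 if not is_persistence(a))

# The two CBTC action layers from the worked example (names hypothetical):
cbtc_layers = [{"noop(inP1)", "noop(inP2)", "Flush"},   # A^RP_0
               {"noop(~clog)", "DunkP1", "DunkP2"}]     # A^RP_1
```

With these layers, `h_rp(cbtc_layers, lambda a: a.startswith("noop"))` counts Flush, DunkP1, and DunkP2.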
Thus, in our CBTC example we have $ h^{LUG}_{RP}(BS_G) = 3$ . Notice that if we construct the $ LUG$ without mutexes for CBTC, we reach the goal after two layers. If we had included mutexes in the $ LUG$ , it would reach the goal after three layers. The way we use mutexes does not change our relaxed plan because we do not use mutexes to influence relaxed plan extraction. Mutexes only help to identify when the belief state $ BS_i$ is not reachable from $ BS_P$ .
