Appendix B: Efficient Action Evaluation

In non-generalizing reinforcement learning, the cost of executing a single learning step can usually be neglected. However, algorithms that generalize in the spaces of sensors and/or actuators are not so simple, and the execution time of each iteration can increase substantially. In an extreme case, this increase can limit the reactivity of the learner, which is very dangerous when working with an autonomous robot.

The most expensive procedure of our algorithm is that of computing the value of all actions (i.e., all valid combinations of elementary actions). The cost of this procedure is especially critical since it is used twice in each step: once to get the guess of each action (in the Action Evaluation procedure detailed in Figure 13) and again to get the goodness of the newly achieved situation after the action execution (when computing the $ v$ value in the Statistics Update procedure detailed in Figure 14). A trivial re-ordering of the algorithm avoids using this expensive procedure twice per learning step: we can select the action to be executed next at the same time that we evaluate the goodness of the newly achieved situation. The drawback of this re-ordering is that the action is selected without taking into account the information provided by the last reward value (the goodness of the situation is assessed before the value adjustment). However, this is not a problem in tasks that require many learning steps.

Even if we use the action-evaluation procedure only once per learning step, we have to optimize it as much as possible, since the brute-force approach described above, which evaluates each action sequentially, is only feasible for simple problems.

The action-evaluation method presented next is based on the observation that many of the actions have the same value, since the most relevant partial rule active at a given moment provides the value for all the actions that are in accordance with the partial command of that rule. Computing separately the value of two actions that end up evaluated using the same rule is a waste of time. This can be avoided by driving the action evaluation by the set of active rules instead of by the set of possible actions, as the brute-force approach does.

Figure 16: General form of the proposed situation-assessment and action-selection procedure.

Figure 16 shows the general form of the algorithm we propose. In this algorithm, partial rules are considered one at a time, ordered from the most relevant rule to the least relevant one. The partial command of the rule under consideration ($ co_w$) is used to process all the actions that are in accordance with that partial command. This already-processed subset of actions need not be considered any further in the action-evaluation procedure. While the rules are processed, we update the current situation assessment ($ v$) and the action to be executed next ($ a$) according, respectively, to the value prediction ($ q_w$) and the guess ($ g_w$) of the rules.
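To make the loop structure concrete, the following is a minimal Python sketch of the procedure of Figure 16. The representation is an assumption made for illustration only: actions and partial commands are dictionaries mapping motor names to values, and each rule object is assumed to expose its partial command ($ co_w$), value prediction ($ q_w$) and guess ($ g_w$) as the attributes co, q and g, with the rules already sorted by decreasing relevance.

\begin{verbatim}
def evaluate_and_select(rules, actions):
    """Sketch of the loop of Figure 16 (illustrative, not the paper's code).

    rules:   active partial rules, sorted by decreasing relevance.
    actions: all valid actions, each a dict {motor: value}.
    Returns the situation assessment v, the action to execute and its guess.
    """
    remaining = list(actions)
    v = float('-inf')              # current situation assessment
    best_guess = float('-inf')
    selected = None                # action to be executed next

    for w in rules:                # most relevant rule first
        # actions still in A that are in accordance with the partial command co_w
        covered = [a for a in remaining
                   if all(a[m] == k for m, k in w.co.items())]
        if not covered:
            continue
        v = max(v, w.q)            # update the situation assessment with q_w
        if w.g > best_guess:       # update the action to execute using g_w
            best_guess, selected = w.g, covered[0]
        # the already-processed subset of actions is not considered again
        remaining = [a for a in remaining if a not in covered]
        if not remaining:          # stop as soon as A becomes empty
            break
    return v, selected, best_guess
\end{verbatim}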

Observe that partial rules can be kept sorted by relevance by the statistics-update procedure, since this is the procedure where rule relevance is modified. When the relevance of a rule changes, its position in the list can be updated accordingly. In this way, we do not have to re-sort the list of rules every time we want to apply the procedure just described.
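As an illustration, a single relevance change can be handled by re-inserting the affected rule at its new position. The sketch below assumes rule objects with a mutable relevance attribute and a list kept in decreasing order of relevance; these names are illustrative, not the paper's data structures.

\begin{verbatim}
import bisect

def reposition(sorted_rules, rule, new_relevance):
    """Keep sorted_rules in decreasing relevance order after one rule changes."""
    sorted_rules.remove(rule)
    rule.relevance = new_relevance
    # bisect expects ascending keys, so use the negated relevance as the key
    keys = [-r.relevance for r in sorted_rules]
    sorted_rules.insert(bisect.bisect_left(keys, -new_relevance), rule)
\end{verbatim}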

When elementary actions are of the form $ (m \leftarrow k)$, with $ m$ a motor and $ k$ a value in the range of possible values for that motor, the above algorithm can be implemented in an especially efficient way, since there is no need to explicitly compute the set of actions $ A$. In this case (see Figures 17 and 18), we construct a decision tree that uses motors as decision attributes and groups in the same leaf all the actions evaluated by the same partial rule (i.e., all the actions removed from the set $ A$ in each iteration of the algorithm in Figure 16).

Figure 17: Top level of the efficient action-evaluation algorithm. At the end of the algorithm, $ v$ is the goodness of the current situation to be used in the Statistics Update algorithm (see Figure 14), $ a$ is the action to be executed next, and $ guess$ is its expected value. The ``Include Rule" procedure is detailed in the next figure.

Figure 18: The Include Rule algorithm searches, starting from node $ n$, for nodes with a partial command compatible with the partial command of rule $ w$, and extends those nodes to insert a leaf in the tree.

Each internal node of the tree classifies actions according to one of the motor commands they include. These internal nodes store the following information:

The leaves of the tree hold information about the value of the actions classified in that leaf. This information is represented with the following set of attributes for each leaf:
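A minimal sketch of one possible representation of both kinds of nodes is given below. The attribute names are assumptions chosen to match the description that follows: an internal node records the motor it tests, the partial command $ co_n$ accumulated from the root, its children and its open/closed status, while a leaf stores the value, guess and relevance taken from the rule that closed it.

\begin{verbatim}
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class Node:
    """Hypothetical tree node for the efficient action evaluation (a sketch)."""
    co: Dict[str, int] = field(default_factory=dict)    # partial command co_n
    motor: Optional[str] = None                          # motor tested at this node
    children: Dict[int, "Node"] = field(default_factory=dict)
    open: bool = True                                    # not yet extended or closed
    # leaf attributes, copied from the rule that turned this node into a leaf
    value: Optional[float] = None
    guess: Optional[float] = None
    relevance: Optional[float] = None

    def is_leaf(self) -> bool:
        return self.value is not None
\end{verbatim}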

At a given moment, the inclusion of a new partial rule in the tree produces the specialization of all the open nodes compatible with the rule (see Figure 18). We say that an open node $ n$ is compatible with a given rule $ w$ if the partial command of the node, $ co_n$, and the partial command of the rule, $ co_w$, do not assign different values to the same motor. The specialization of an open node can result in the extension of the node (i.e., new branches are added to the tree under that node) or in the transformation of this node into a leaf. A node is extended when the partial command of the rule affects some motors not included in the partial command of the node. This means that there are some motor values not yet taken into account in the tree but that have to be used in the action evaluation according to the rule under consideration. When a node is extended, one of the motors not present in the layers above in the tree is used to generate a layer of open nodes under the current node. After that, the node is considered closed and the Include Rule procedure is repeated for this node (with different effects, because now the node is closed). When all the motors affected by the partial command of the rule are also affected by the partial command of the node, the node is transformed into a leaf storing the value, guess, and relevance attributes extracted from the information associated with the rule.
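The sketch below mirrors this specialization step using the Node representation above. It is an illustrative reconstruction of the Include Rule procedure of Figure 18, not the paper's code; motor_values is a hypothetical mapping from each motor to its admissible values, and rules are assumed to expose co, q, g and relevance.

\begin{verbatim}
def include_rule(node, rule, motor_values):
    """Specialize, from node downwards, every open node compatible with rule."""
    if node.is_leaf():
        return                   # already evaluated by a more relevant rule
    # compatibility: co_n and co_w must not assign different values to a motor
    if any(m in node.co and node.co[m] != k for m, k in rule.co.items()):
        return
    if node.open:
        missing = [m for m in rule.co if m not in node.co]
        if missing:
            # extension: add a layer of open nodes for a motor not yet on the path
            node.motor = missing[0]
            for k in motor_values[node.motor]:
                node.children[k] = Node(co={**node.co, node.motor: k})
            node.open = False
            include_rule(node, rule, motor_values)   # repeat on the now-closed node
        else:
            # every motor of co_w is already fixed on the path: close into a leaf
            node.value, node.guess, node.relevance = rule.q, rule.g, rule.relevance
            node.open = False
    else:
        for child in node.children.values():
            include_rule(child, rule, motor_values)
\end{verbatim}

Note that only the child that agrees with the rule on the newly added motor is specialized further; its siblings remain open and can be specialized later by less relevant rules.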

The process is stopped as soon as we detect that all nodes have been closed (i.e., all the external nodes of the tree are leaves). In this case, the rules still to be processed can have no effect on the tree and, consequently, are not useful for action evaluation. If a rule is consistently not used for action evaluation, it can be removed from the controller.


Table 2: Set of rules of the controller. The values $ q$ and $ e$ are stored, while $ \rho $ and $ guess$ are computed from them. All partial views are defined as $ TRUE_v$ to indicate that they are active in the current time step.
Partial View Partial Command $ q$ $ e$ $ \rho $ $ guess$
$ TRUE_v$ $ (m_1\leftarrow v_1)\wedge (m_2 \leftarrow v_1)$ 5 0.1 0.83 5.1
$ TRUE_v$ $ (m_1\leftarrow v_1) $ 7 0.9 0.52 6.5
$ TRUE_v$ $ (m_2\leftarrow v_1)\wedge (m_3 \leftarrow v_1)$ 8 2.0 0.33 6.0
$ TRUE_v$ $ (m_2\leftarrow v_1) $ 3 3.1 0.24 6.2
$ TRUE_v$ $ (m_1\leftarrow v_0) $ 2 3.5 0.22 5.3
$ TRUE_v$ $ (m_2\leftarrow v_0) $ 10 3.6 0.21 4.1
$ TRUE_v$ $ (m_3\leftarrow v_1) $ 1 4.0 0.20 5.2
$ TRUE_v$ $ (m_3\leftarrow v_0) $ 6 4.5 0.18 12.7


Figure 19: Six different stages during the construction of the tree for action evaluation. Each stage corresponds to the insertion of one rule from Table 2.
\includegraphics[width=1.0\linewidth]{images/figure_jair_A.eps}

A toy-size example can illustrate this tree-based action-evaluation algorithm. Suppose that we have a robot with three motors, each accepting two different values (named $ v_0$ and $ v_1$). This produces a set of 8 different actions. Suppose that, at a given moment, the robot controller includes the set of rules shown in Table 2. In the Action Evaluation algorithm (Figure 17), rules are processed from the most to the least relevant one, expanding an initially empty tree using the algorithm in Figure 18. The inclusion of a rule in the tree results either in an extension of the tree (see stages B, D, and E in Figure 19) or in closing branches by converting open nodes into leaves (stages C and F). In this particular case, the tree becomes completely closed after processing 5 of the 8 active rules in the controller. At the end of the process, we have a tree with five leaves. Three of them include two actions and the other two represent a single action each. Using the tree, we can say that the value of the situation in which the tree is constructed, $ v$, is 8 (given by the leaf circled with a solid line in the figure). Additionally, the next action to be executed is of the form $ (m_1\leftarrow v_1,m_2\leftarrow v_0,m_3\leftarrow \sharp)$, where '$ \sharp$' represents any possible value for that motor. This optimal action is given by the leaf circled with a dashed line, which is the leaf with the largest guess value.
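As a usage illustration, the sketches above can be run on the rules of Table 2, encoding $ v_0$ and $ v_1$ as 0 and 1. The Rule container and the helper functions leaves and closed below are hypothetical additions, not part of the paper; with them, this sketch reproduces the outcome described above: the tree closes after the fifth rule, the situation assessment is 8, and the selected leaf corresponds to $ (m_1\leftarrow v_1,m_2\leftarrow v_0,m_3\leftarrow \sharp)$ with guess 6.5.

\begin{verbatim}
from dataclasses import dataclass
from typing import Dict

@dataclass
class Rule:                      # hypothetical container for one row of Table 2
    co: Dict[str, int]
    q: float
    g: float
    relevance: float

# The rules of Table 2, already ordered by decreasing relevance.
rules = [
    Rule({'m1': 1, 'm2': 1}, 5, 5.1, 0.83),
    Rule({'m1': 1},          7, 6.5, 0.52),
    Rule({'m2': 1, 'm3': 1}, 8, 6.0, 0.33),
    Rule({'m2': 1},          3, 6.2, 0.24),
    Rule({'m1': 0},          2, 5.3, 0.22),
    Rule({'m2': 0},         10, 4.1, 0.21),
    Rule({'m3': 1},          1, 5.2, 0.20),
    Rule({'m3': 0},          6, 12.7, 0.18),
]
motor_values = {'m1': (0, 1), 'm2': (0, 1), 'm3': (0, 1)}   # 2^3 = 8 actions

def leaves(node):                # hypothetical helper: collect all leaves
    if node.is_leaf():
        return [node]
    return [l for c in node.children.values() for l in leaves(c)]

def closed(node):                # hypothetical helper: no open node remains
    if node.is_leaf():
        return True
    return not node.open and all(closed(c) for c in node.children.values())

root = Node()
for w in rules:                  # most relevant rule first (Figure 17)
    include_rule(root, w, motor_values)
    if closed(root):             # here this happens after the fifth rule
        break                    # the remaining rules cannot change the tree

v = max(l.value for l in leaves(root))            # situation assessment: 8
best = max(leaves(root), key=lambda l: l.guess)   # leaf with the largest guess
print(v, best.co, best.guess)    # 8 {'m1': 1, 'm2': 0} 6.5
\end{verbatim}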

Figure 20: Log of the execution time (in seconds) of the brute-force approach vs. the tree-based one.
\includegraphics[width=0.9\linewidth]{images/originals/figure_jair_11}
\includegraphics[width=0.8\linewidth]{images/figure_jair_11_legend}

The cost of our algorithm largely depends on the specific set of partial rules to be processed. In the worst case, the cost is:

$\displaystyle O(n_r \: l^{n_m}),$    

with $ n_r$ the number of rules, $ n_m$ the number of motors, and $ l$ the maximal number of values accepted by a motor. This is because, in the worst case, to insert a given rule we have to visit all the nodes of a maximally expanded tree (i.e., a tree where each node has $ l$ subtrees and where all the final nodes of the branches are still open). The number of nodes of such a tree is

$\displaystyle \sum_{i=0}^{n_m}l^i=\frac{l^{n_m+1}-1}{l-1}=O(l^{n_m}).$    
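For instance, in the toy example above (three motors, $ n_m=3$, each accepting $ l=2$ values), a maximally expanded tree contains at most $ \sum_{i=0}^{3}2^i=15$ nodes, while the total number of actions is $ 2^3=8$.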

We can transform the cost expression by taking into account that $ l^{n_m}$ is the total number of possible combinations of elementary actions ($ n_c$) or, in other words, the total number of actions. Therefore, the cost of the presented algorithm is

$\displaystyle O(n_r \: n_c).$    

On the other hand, the cost of the brute-force approach is always

$\displaystyle \Theta(n_r\:n_c).$    

So, in the worst case, the cost of the presented algorithm is of the same order as that of the brute-force approach. However, since at most $ l$ rules suffice to close a maximally expanded tree (one rule for each of the different values of the motor used in the last still-open layer of the tree), the cost of the tree-based algorithm is, on average, much smaller than that of the brute-force approach.

Figure 20 exemplifies the difference in performance between the brute-force action-evaluation procedure and the tree-based one. The figure shows the time taken in the execution of the toy example of Section 6.1. For this experiment, we defined some void motors, i.e., motors whose actions have no effect on the environment. As can be seen, as the number of void motors increases, the cost of the tree-based evaluation becomes significantly lower than that of the brute-force approach.
