The experiment was run in phases, each phase corresponding to an increase in problem size. Thirty test problems of each size were randomly generated. Since it is not possible to obtain a truly random distribution within a nonartificial domain, the following strategy was adopted for problem generation. First, the initial state was constructed by fixing the number of objects of each type contained in the domain description. For example, in the first experiment there were six cities (12 locations within cities), six planes, and six trucks. The initial state of each problem was constructed by first including the filter conditions (nonachievable conditions), which defined the layout of the cities. For example, the condition (IS-A AIRPORT AP1) identified AP1 as an airport, and the condition (SAME-CITY AP1 PO1) indicated that AP1 and PO1 were located in the same city. Second, the achievable (non-filter) conditions that appear in the add clauses of the domain operators were varied for each problem by choosing object constants randomly from those available, with the restriction that no two initial-state conditions were inconsistent. For example, each plane and package was assigned to a single randomly chosen location. Goals were chosen from among these achievable conditions in the same manner. Although no attempt was made to create interacting goals, goal interaction was common in the multi-goal problems. This was because a limit was imposed on the number of steps in the plan, which meant that multi-goal problems often could not be solved by concatenating subplans for the individual subgoals. In these instances, the planner could take advantage of linking opportunities and achieve multiple goals through common steps. It also meant that the planner often had to backtrack over the derivation for one goal in order to solve an additional goal.
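The generation strategy above can be sketched as follows. This is a minimal illustration, not the actual generator: the predicate and constant names (IS-A, SAME-CITY, AT, AP1, PO1, ...) follow the examples in the text, while the AT predicate, the PLANE/TRUCK/PKG constants, and the dictionary representation are assumptions made for the sketch.

```python
import random

def generate_problem(n_goals, n_cities=6, n_planes=6, n_trucks=6, seed=None):
    """Sketch of the random problem generator described above (illustrative only)."""
    rng = random.Random(seed)

    # Filter (nonachievable) conditions fix the city layout: one airport
    # and one post office per city, giving two locations per city.
    airports = [f"AP{i}" for i in range(1, n_cities + 1)]
    offices = [f"PO{i}" for i in range(1, n_cities + 1)]
    init = [("IS-A", "AIRPORT", ap) for ap in airports]
    init += [("SAME-CITY", ap, po) for ap, po in zip(airports, offices)]

    # Achievable (non-filter) conditions: each mobile object is assigned a
    # single randomly chosen location, so no two AT conditions conflict.
    locations = airports + offices
    for i in range(1, n_planes + 1):
        init.append(("AT", f"PLANE{i}", rng.choice(airports)))
    for i in range(1, n_trucks + 1):
        init.append(("AT", f"TRUCK{i}", rng.choice(locations)))
    packages = [f"PKG{i}" for i in range(1, n_goals + 1)]
    for pkg in packages:
        init.append(("AT", pkg, rng.choice(locations)))

    # Goals are drawn from the achievable conditions in the same manner.
    goals = [("AT", pkg, rng.choice(locations)) for pkg in packages]
    return {"init": init, "goals": goals}
```

Seeding a `random.Random` instance per problem makes each generated problem reproducible while keeping the distribution random across problems.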
The first experiment used the 6-city domain and was run in 6 phases. The size of the test problems (which ranged from 1 to 6 goals) was increased for each phase. Prior to each phase n of the experiment, the case library was emptied and the planner was retrained on randomly generated problems of size n. Training problems were solved by attempting the single-goal subproblem from scratch, storing a trace of the derivation of its solution if one was not already present in the library, and then successively adding an extra goal. Multi-goal problems were stored only when the retrieved cases used in solving the problem failed. Whenever a problem could not be solved through sequenced replay of previous cases, the negatively interacting goals contained in the failure reason were identified and a new case achieving these goals alone was stored in the library. In each phase of the experiment, the planner was tested on the same 30 randomly generated test problems after varying amounts of training. The problems were solved both in from-scratch mode and with replay of multiple cases retrieved from the library constructed during training.
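The per-problem training regime can be sketched as follows. This is a hedged sketch, not DERSNLP+EBL's implementation: `solve`, `replay`, and the `CaseLibrary` interface are assumed stand-ins, where `solve(init, goals)` returns a derivation trace (or None) and `replay(cases, init, goals)` returns a trace together with the negatively interacting goals from the failure reason when replay fails.

```python
class CaseLibrary:
    """Minimal in-memory case library keyed by the set of goals covered (assumption)."""
    def __init__(self):
        self.cases = {}

    def has_case(self, goals):
        return frozenset(goals) in self.cases

    def store(self, goals, trace):
        # Store a derivation trace only if one is not already present.
        self.cases.setdefault(frozenset(goals), trace)

    def retrieve(self, goals):
        # Retrieve every stored case that covers some of the query goals.
        return [t for g, t in self.cases.items() if g & frozenset(goals)]

def train_on_problem(init, goals, library, solve, replay):
    """One training problem: attempt the single-goal subproblem from scratch,
    then successively add an extra goal, storing a new multi-goal case only
    when sequenced replay of previous cases fails."""
    attempted = []
    for goal in goals:
        attempted.append(goal)
        if len(attempted) == 1:
            trace = solve(init, attempted)
            if trace is not None and not library.has_case(attempted):
                library.store(attempted, trace)
        else:
            cases = library.retrieve(attempted)
            trace, interacting = replay(cases, init, attempted)
            if interacting:
                # Replay failed: solve the negatively interacting goals
                # alone and store a case covering just those goals.
                new_trace = solve(init, interacting)
                if new_trace is not None:
                    library.store(interacting, new_trace)
```

The key design point reflected here is that the library stays small: routine multi-goal problems solvable by replay contribute nothing new, and only genuinely interacting goal sets earn a case of their own.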
A second experiment, which tested the planner on a more complex 15-city domain, employed a stable case library formed by training DERSNLP+EBL on 120 (6-city, 6-goal) logistics transportation problems. This library of smaller problems was then used when the planner was tested on the larger (15-city) problems, which ranged from 6 to 10 goals.