Test Domains

One of our test domains is the well-known fifteen puzzle problem.
This problem consists of a 4 × 4 grid containing tiles numbered one to
fifteen and a single empty position called the *blank*. A tile can be
moved into the blank position from an adjacent position, yielding the
four operators up, down, left, and right. Given initial and goal
configurations, the problem is to find a sequence of moves that
transforms the initial configuration into the goal.
A sample goal configuration is shown in Figure 11. The Manhattan
distance function provides an admissible heuristic for this problem, and
we use the 100 problem instances provided in Korf's test data [25].
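The Manhattan distance heuristic for the fifteen puzzle can be sketched as follows; this is a minimal illustration, and the board encoding (a flat 16-tuple with 0 for the blank) and function name are our own choices rather than details from the text:

```python
def manhattan_distance(state, goal):
    """Sum of horizontal and vertical tile displacements on a 4 x 4 board.

    `state` and `goal` are tuples of 16 entries; 0 denotes the blank,
    which is excluded from the sum so the heuristic stays admissible.
    """
    total = 0
    for tile in range(1, 16):
        i, j = state.index(tile), goal.index(tile)
        total += abs(i // 4 - j // 4) + abs(i % 4 - j % 4)
    return total
```

Because each move changes exactly one tile's position by one grid cell, this sum never overestimates the number of remaining moves.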

Our second application domain is the robot arm motion planning problem. Traditional motion planning methods are very costly when applied to a robot arm, because each joint has an infinite number of angles to which it can move from a given configuration, and because collision checking must be performed for each arm segment. Craig provides a detailed description of the calculations needed to determine the position of the end effector in the 3D workspace given the current joint angles [12]. For our experiments, we use the parameters defined for the Puma 560 robot arm with six degrees of freedom, shown in Figure 12. The size and layout of the room are the same for each of our test problems, but we vary the initial and goal arm configurations to generate 20 problem instances. The robot arm path planning problem is particularly difficult because considering every possible arm movement yields a search space with an infinite branching factor. We instead encode one possible move size for each joint, resulting in a branching factor of 6. The move resolution is user-selectable; for these experiments we choose a resolution of 1 degree.
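Successor generation under this discretized encoding can be sketched as below. The text does not say in which direction each joint moves, so this sketch assumes a single positive rotation per joint, which matches the stated branching factor of 6; the names and the direction convention are illustrative assumptions:

```python
RESOLUTION = 1.0  # move size in degrees; user-selectable in the experiments

def successors(config):
    """Generate neighbouring configurations for a 6-DOF arm.

    One fixed move size per joint gives a branching factor of 6.
    Assumption: each successor rotates exactly one joint by
    +RESOLUTION degrees, wrapping at 360.
    """
    succ = []
    for j in range(6):
        nxt = list(config)
        nxt[j] = (nxt[j] + RESOLUTION) % 360.0
        succ.append(tuple(nxt))
    return succ
```

In the actual planner, each generated configuration would still be collision-checked segment by segment before it is expanded.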

Our third test domain uses an artificial search space in which parameters including branching factor, tree imbalance, solution cost, heuristic error, and left-to-right goal position can be specified by the user. We generate 20 problem instances for use in the experiments.
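The idea behind such a generator can be sketched as follows. This is a simplified illustration covering only the branching factor, depth, and heuristic error parameters; the paper's generator additionally controls tree imbalance and the left-to-right goal position, and all names here are our own:

```python
import random

def build_artificial_tree(branching, depth, error_pct, seed=0):
    """Generate an artificial search tree as nested dicts.

    Each node records its true distance to the goal depth and a
    heuristic estimate understated by up to `error_pct` percent, so
    the heuristic remains admissible.
    """
    rng = random.Random(seed)

    def node(d):
        true_dist = depth - d
        h = true_dist * (1 - rng.random() * error_pct / 100.0)
        children = [] if d == depth else [node(d + 1) for _ in range(branching)]
        return {"h": h, "true": true_dist, "children": children}

    return node(0)
```

Fixing the seed makes each generated instance reproducible across strategy runs, which matters when the same tree must be searched once per parallel strategy.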

For our fourth test domain, we integrate our own C-based version of the SNLP nonlinear planner [3] into EUREKA. To conform with the EUREKA architecture, the integrated planner utilizes IDA* search instead of the Best-First search method employed by SNLP. Each plan repair step (filling an open condition or handling a threat) is treated as a separate node in the search space to be explored. We compute the cost of a plan solution as the number of operations and constraints in the plan, and the distance to the goal is estimated using the number of remaining flaws. We select 20 problem instances for our experiments from the blocks-world, Towers of Hanoi, and monkey-and-bananas planning domains.
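The plan evaluation used by the integrated planner's IDA* search can be sketched as follows, assuming a simple dictionary representation of a partial plan; the field names are our own, not the planner's actual data structures:

```python
def plan_cost(plan):
    """g(n): operations plus constraints in the partial plan."""
    return len(plan["steps"]) + len(plan["constraints"])

def flaws_remaining(plan):
    """h(n): open conditions and threats still to be repaired."""
    return len(plan["open_conditions"]) + len(plan["threats"])

def f_value(plan):
    """Node evaluation f = g + h used by IDA* over the plan space."""
    return plan_cost(plan) + flaws_remaining(plan)
```

Each plan repair step (filling an open condition or handling a threat) produces a child plan whose `f_value` is compared against the current IDA* cost bound.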

To create test cases, we run each problem instance multiple times, once for each parallel search strategy in isolation. The search strategy that produces the best speedup is considered to be the "correct" classification of the corresponding search tree for C4.5 to learn. Test cases are run on 64 processors of an nCUBE 2 and on 8 distributed workstations using PVM. In the first set of experiments we fix all strategy choices but one and vary the strategy choice under consideration. The default parameter choices are:

- Distribution strategy = Distributed Tree Search
- Number of clusters = 1
- Load balancing = On
- Processor selection = Neighbor
- Percentage of stack distributed upon request = 30%
- Donating strategy = TailOfList
- Anticipatory load balancing trigger = 0
- Ordering = Fixed
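The labeling step described above reduces to picking the strategy with the best observed speedup for each search tree; a minimal sketch, with an illustrative function name:

```python
def label_tree(speedups):
    """Return the 'correct' class for C4.5: the strategy whose
    isolated run achieved the best speedup on this search tree.

    `speedups` maps strategy name -> measured speedup.
    """
    return max(speedups, key=speedups.get)
```

The resulting (tree features, label) pairs form the training data from which C4.5 induces its decision tree.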

We compare the results of C4.5-selected strategies to each strategy used exclusively for all problem instances. Speedup results for various strategy decisions, averaged over all problem instances, are shown in the following sections. The best average speedup is highlighted in each case. The C4.5 results are obtained by performing a ten-fold cross validation on the fifteen puzzle data and a three-fold cross validation on the robot arm motion planning, artificial, and nonlinear planning data sets. Specifically, a decision tree is created from the training cases and used to select the strategy for the test cases. C4.5 speedup is averaged over all test cases for one iteration of this process, and the final values are averaged over the cross-validation iterations.
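This cross-validation procedure can be sketched as follows. Here `fit` and `predict` are placeholders standing in for the C4.5 training and classification steps, and the fold-splitting scheme is an illustrative assumption:

```python
def k_fold_speedup(cases, k, fit, predict):
    """Estimate average strategy-selection speedup by k-fold cross validation.

    `cases` is a list of (features, speedup_by_strategy) pairs.
    `fit(train)` builds a classifier from the training fold;
    `predict(model, features)` returns the strategy it selects.
    """
    folds = [cases[i::k] for i in range(k)]
    fold_means = []
    for i, test in enumerate(folds):
        train = [c for j, fold in enumerate(folds) if j != i for c in fold]
        model = fit(train)
        # Speedup actually achieved by each selected strategy on the test fold.
        picks = [speedups[predict(model, feats)] for feats, speedups in test]
        fold_means.append(sum(picks) / len(picks))
    return sum(fold_means) / len(fold_means)
```

Averaging first within each fold and then across folds mirrors the two-level averaging described in the text.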