Conclusions and Future Work

This paper reports on work performed to combine the benefits of multiple parallel search approaches in the EUREKA system. Experimentation reveals that the strategies developed over the last few years offer distinct benefits for improving the performance of AI applications. However, while a particular algorithm can provide significant speedup for one type of problem, on other problems the same algorithm can actually produce worse results than a serial version of the search algorithm. As a result, these strategies must be chosen carefully based on the characteristics of a particular problem.

In this paper we demonstrate the ability of EUREKA to automatically select parallel search strategies and set appropriate parameters. EUREKA employs the C4.5 machine learning system to make an intelligent choice of strategies for each new problem instance. Experiments in the domains of the fifteen puzzle, robot arm motion planning, an artificial search space, and planning problems indicate that, when high-variance training cases are used, EUREKA yields both improved speedup and significantly improved classification accuracy over any strategy used in isolation. These experiments also demonstrate EUREKA's ability to select all parameters at once and to outperform any fixed strategy over a set of problem instances. In addition, we demonstrate that EUREKA benefits from training cases drawn from a variety of test domains; we therefore expect the performance of the system to improve further as we incorporate data from new problem domains and architectures.
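To make the selection step concrete, the following is an illustrative sketch in Python, not EUREKA's actual rules: a hand-written decision procedure standing in for the kind of tree C4.5 would induce from recorded (problem features, best strategy) training runs. The feature names, thresholds, and strategy labels here are hypothetical.

```python
def select_strategy(features):
    """Map problem-space features, measured early in the search,
    to a parallel search strategy.

    Hypothetical stand-in for a C4.5-induced decision tree: the
    features ('branching_factor', 'heuristic_error') and the
    strategy names are illustrative, not EUREKA's.
    """
    # Wide spaces: favor distributing the tree across processors.
    if features['branching_factor'] > 8:
        return 'distributed-tree-search'
    # Weak heuristics: favor strategies that reuse search effort.
    if features['heuristic_error'] > 0.5:
        return 'transposition-driven'
    # Otherwise the serial baseline may win, as the paper notes.
    return 'serial-ida-star'
```

The point of the learned selector is exactly this shape of mapping: cheap, early-measurable features in, a per-instance strategy choice out.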

Two challenges raised by our research are determining discriminating features and providing clear classifications for training examples. In our experiments we verified that the features we chose could be measured early in the search process and still be representative of the problem space later in the execution. As we apply our approach to a greater variety of problems, we will need a more formal methodology for selecting representative and discriminating features. In addition, we observed dramatic performance improvements when test cases were drawn from problem instances with clear classifications. We would like to pursue methods for learning from instances that exhibit low variation in the performance of alternative strategies and high variation in problem size.

The current implementation of EUREKA focuses on an IDA* approach to search. One reason for this choice is the algorithm's linear memory requirement. A second advantage is that iterative deepening provides feedback in each iteration that can be used to adjust parameters for the next iteration. As a result, EUREKA can potentially adjust its strategy choices from one iteration of the search to the next as features of the space vary. However, in some problem domains non-iterative search algorithms may be preferred. A future challenge for our research is to refine the adaptive parallel search algorithm for use in a greater variety of iterative and non-iterative search algorithms.
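The iterative structure that makes this per-iteration feedback possible can be sketched as follows. This is a minimal, serial IDA* in Python (not EUREKA's parallel implementation); the node count returned from each iteration is the kind of signal an adaptive controller could use to retune strategy parameters before the next, deeper iteration. The example graph and heuristic are made up for illustration.

```python
def ida_star(start, goal_test, successors, h):
    """Iterative-deepening A*: depth-first search with an f-cost
    bound that grows each iteration, so memory stays linear in the
    solution depth. Each iteration also yields feedback (here, the
    number of nodes expanded) before the next one begins."""
    bound = h(start)
    while True:
        expanded = 0

        def dfs(node, g, path):
            nonlocal expanded
            expanded += 1
            f = g + h(node)
            if f > bound:
                return f, None          # prune; report the overflow f
            if goal_test(node):
                return f, path          # solution found
            minimum = float('inf')
            for succ, cost in successors(node):
                if succ in path:        # avoid cycles on the current path
                    continue
                t, found = dfs(succ, g + cost, path + [succ])
                if found is not None:
                    return t, found
                minimum = min(minimum, t)
            return minimum, None

        new_bound, solution = dfs(start, 0, [start])
        # Between iterations: `expanded` (and new_bound - bound) are
        # the feedback an adaptive scheme could use to adjust parameters.
        if solution is not None:
            return solution, expanded
        if new_bound == float('inf'):
            return None, expanded       # exhausted: no solution exists
        bound = new_bound

# Tiny illustrative graph: A->B->D costs 3, A->C->D costs 4;
# H is an admissible heuristic to the goal D.
GRAPH = {'A': [('B', 1), ('C', 3)], 'B': [('D', 2)], 'C': [('D', 1)], 'D': []}
H = {'A': 2, 'B': 2, 'C': 1, 'D': 0}
```

Running `ida_star('A', lambda s: s == 'D', lambda s: GRAPH[s], lambda s: H[s])` performs two iterations (bounds 2, then 3) before returning the optimal path through B.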

EUREKA implementations are currently available on a variety of architectural platforms, including MIMD distributed-memory and shared-memory multiprocessors, a distributed network of machines running PVM, POSIX multithreaded machines, and machines using Java threads and Cilk threads. Problem domains currently under investigation include additional combinatorial optimization problems, such as the n-Queens problem, as well as the integration of machine learning, theorem proving, and natural language algorithms into this search architecture. We hope to demonstrate that parallel heuristic search algorithms can yield both optimal and scalable approaches to planning, machine learning, natural language processing, theorem proving, and many other computation-intensive areas of AI.

