**Advisor:** Guy E. Blelloch
**Project:** SCANDAL
**Research Interests:**
- Design, analysis, and implementation of parallel algorithms, dynamic and irregular parallelism, pipelining, computational biology, randomized algorithms

My research has centered on the design, analysis, and implementation of parallel algorithms. In particular, I am interested in algorithms that use irregular or dynamic data structures such as linked lists, trees, and graphs. Such parallel pointer-based algorithms are likely to become more common for several reasons. First, scientific computing is moving to irregular, sparse, and dynamic applications and algorithms; for example, the space and time complexity of sparse problems can be substantially reduced by replacing regular representations with pointer-based ones. Second, because shared-memory systems with a small number of processors are becoming quite common, there is growing demand to move single-processor applications to multiple processors. Finally, more general-purpose software solutions are becoming available to improve the performance of irregular computations.

Over the last 20 years there has been much theoretical work on parallel pointer-based algorithm design. Often these algorithms are asymptotically very fast but are so complex and have such large constant factors that they are useful only for extremely large data sets. In addition, most of this theoretical work is based on the Parallel Random Access Machine (PRAM) model. Critics complain that the model is unrealistic because it assumes that a memory access costs about the same as any other operation and that synchronization is free; on most multiprocessor machines, a memory access can be 100 times slower than a floating-point operation. If the PRAM is unrealistic, do PRAM algorithms have any value? What is the performance of PRAM pointer-based algorithms in practice? Do they necessarily perform poorly?

I have been studying two fundamental pointer-based problems: list ranking and maintaining dynamic balanced trees. They pose the challenges common to most fine-grain pointer-based parallel algorithms: the parallel algorithms can be quite different from their serial counterparts and often have much higher overheads; their memory accesses are hard to optimize and cannot be scheduled at compile time; and they have huge communication bandwidth requirements. My goal is to show that, by designing PRAM algorithms that are work-efficient and have small constants, and by using new implementation techniques, parallel pointer-based algorithms can indeed achieve substantial speedups over fast workstations.
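To make the list-ranking problem concrete, the classic PRAM approach is Wyllie's pointer jumping: every node repeatedly adds its successor's rank to its own and then shortcuts its pointer past that successor. The sketch below is a sequential simulation in Python (not the implementation discussed here); each list comprehension stands in for one synchronous parallel step over all nodes.

```python
def list_rank(succ):
    """Rank the nodes of a linked list by pointer jumping.

    succ[i] is the successor of node i; the last node points to itself.
    Returns rank[i] = number of links from node i to the end of the list.
    Each loop iteration simulates one synchronous PRAM step.
    """
    n = len(succ)
    # A node one hop from somewhere starts at rank 1; the terminal at 0.
    rank = [0 if succ[i] == i else 1 for i in range(n)]
    nxt = list(succ)
    # O(log n) rounds: doubling the distance each pointer spans.
    while any(nxt[i] != nxt[nxt[i]] for i in range(n)):
        rank = [rank[i] + rank[nxt[i]] for i in range(n)]
        nxt = [nxt[nxt[i]] for i in range(n)]
    return rank
```

Note that pointer jumping performs O(n log n) total work versus O(n) for the trivial serial traversal, which is precisely why the work-efficient algorithms with small constants mentioned above matter in practice.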

I am also interested in pursuing research in computational biology.

Margaret Reid-Miller Last modified: Wed Aug 12 17:46:25 EDT 1998