Marco Zagha, working with Guy E. Blelloch and the Scandal project

B.S., Computer Science and Engineering, University of California at Los Angeles, 1988
M.S., Computer Science, School of Computer Science, Carnegie Mellon University, 1991

Research Interests: supercomputers, parallel programming, (practical) parallel algorithms, parallel architectures, vector architectures, parallel languages


My research centers on techniques for implementing algorithms and languages on parallel supercomputers. I am especially interested in irregular algorithms, that is, algorithms that manipulate irregular data structures (such as trees, graphs, and sparse matrices) or solve combinatorial problems (such as sorting). Efficient irregular computation is important for achieving high performance on applications such as particle simulation and finite element analysis and for using supercomputers on a broader range of scientific and commercial applications.
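A canonical example of an irregular computation is sparse matrix-vector multiplication: the pattern of accesses into the dense vector depends on the matrix's nonzero structure and is only known at run time. The following is a minimal illustrative sketch in C using the compressed sparse row (CSR) format; the function name and layout are mine, not code from this work.

```c
#include <assert.h>

/* y = A*x for a sparse matrix A in compressed sparse row (CSR) form.
 * row_ptr[i]..row_ptr[i+1]-1 indexes the nonzeros of row i; col[]
 * holds their column indices. The gather x[col[j]] is the irregular,
 * data-dependent memory access that makes this hard to run fast. */
void csr_spmv(int nrows, const int *row_ptr, const int *col,
              const double *val, const double *x, double *y)
{
    for (int i = 0; i < nrows; i++) {
        double sum = 0.0;
        for (int j = row_ptr[i]; j < row_ptr[i + 1]; j++)
            sum += val[j] * x[col[j]];   /* data-dependent gather */
        y[i] = sum;
    }
}
```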

Although there is a large body of theoretical work on irregular algorithms, many issues must be addressed to bridge the gap between theoretical algorithms and practical implementations. The most important issues include load balancing, reducing constant factors, and restructuring communication. For example, algorithms designed for theoretical shared-memory models require frequent global memory references with dynamic access patterns. Because these algorithms typically demand more communication bandwidth than most supercomputers can provide, the communication must be restructured for the target architecture to obtain an efficient implementation.
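One simple illustration of restructuring communication is computing a histogram. The obvious formulation performs one fine-grained scattered increment per key (counts[key[i]]++), which on a real machine turns into many small, conflicting global memory updates. Sorting the keys first converts the scattered updates into contiguous runs, trading random communication for a sort. This is an illustrative sequential sketch in C; the function is mine, not code from this work.

```c
#include <stdlib.h>
#include <string.h>

static int cmp_int(const void *a, const void *b)
{
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

/* Histogram by sorting: rather than n fine-grained scattered
 * increments (counts[key[i]]++), sort the keys so all updates to a
 * given bucket become one contiguous run, then record run lengths.
 * Note: sorts keys[] in place. */
void sorted_histogram(int *keys, int n, int *counts, int nbuckets)
{
    memset(counts, 0, (size_t)nbuckets * sizeof(int));
    qsort(keys, (size_t)n, sizeof(int), cmp_int);
    for (int i = 0; i < n; ) {
        int j = i;
        while (j < n && keys[j] == keys[i]) j++;
        counts[keys[i]] = j - i;   /* one write per bucket present */
        i = j;
    }
}
```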

Over the last few years, I have been working on high-performance implementations of irregular algorithms on a variety of supercomputers, including the Connection Machines CM-2 and CM-5 and the Cray Y-MP/C90. For my thesis, I am focusing on pipelined-memory multiprocessors with interleaved memory systems, such as the Cray C90, NEC SX-3, and Tera, primarily because these machines support global communication at rates exceeding those of current massively parallel architectures. The main goal of my thesis is to show that, with new programming techniques, pipelined-memory multiprocessors can efficiently execute a wide variety of irregular algorithms.

In addition to working on techniques for implementing irregular algorithms, I have been working with the Scandal project on implementing data-parallel languages. I have implemented a library of segmented vector operations on the Cray C90 to support NESL, a portable, nested data-parallel language.
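To give the flavor of a segmented vector operation: a segmented plus-scan is a prefix sum that restarts at the beginning of each segment, where segment boundaries are marked by a flag vector. Operations of this kind form the core of the vector libraries underlying nested data-parallel languages such as NESL. This is an illustrative sequential sketch in C (the library versions are, of course, vectorized); the function name is mine.

```c
/* Segmented inclusive plus-scan: a running sum over src[] that resets
 * whenever flags[i] == 1, i.e. at the head of each segment. */
void seg_plus_scan(const double *src, const int *flags,
                   double *dst, int n)
{
    double running = 0.0;
    for (int i = 0; i < n; i++) {
        if (flags[i]) running = 0.0;  /* new segment: reset the sum */
        running += src[i];
        dst[i] = running;
    }
}
```

For example, scanning [1, 2, 3, 4, 5] with segment heads at positions 0 and 2 produces [1, 3, 3, 7, 12]: the sum restarts at the third element.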

Other research interests I would like to pursue someday include parallel architecture, processor and memory system architecture, and computer music.