I'm in my third year of graduate school at the UW. I am now starting to explore the new area of phased compilation, with Susan Eggers as my advisor, and am part of the UW Dynamic Compilation Group. Phased compilation encompasses such ideas as partial evaluation, incremental specialization, run-time code generation, and dynamic compilation (these concepts overlap). I plan to do my generals and my thesis in this area, and hope to graduate around the time our new building is completed (or soon after).
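To give a flavor of the common idea behind these techniques (this is just a toy sketch of my own, not code from any of these projects): when one input to a general routine is known before the others, a compiler phase can evaluate everything that depends on it ahead of time and emit residual code with that value baked in.

    #include <stdio.h>

    /* General code: the exponent n is an ordinary run-time value. */
    double power(double x, int n)
    {
        double r = 1.0;
        while (n-- > 0)
            r *= x;
        return r;
    }

    /* Residual code a specializer could emit once n is known to be 3:
       the loop and the tests on n have been evaluated away. */
    double power_3(double x)
    {
        return x * x * x;
    }

    int main(void)
    {
        printf("%g %g\n", power(2.0, 3), power_3(2.0));
        return 0;
    }

Partial evaluation does this transformation at compile time; run-time code generation and dynamic compilation do it once the value actually shows up during execution.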
In addition to phased compilation, lately I've been interested in extensible operating systems and distributed systems. This past spring, I read up on techniques for creating and exploiting instruction-level parallelism (ILP). Many of the papers came from the IMPACT group at UIUC.
In the past I've been interested in parallel programming languages and environments, parallel algorithms, and scientific computing. I'm also concerned about the impact of computing technologies on society.
In spring 1995, I finished my quals project under Richard Ladner, "Design and Analysis of Collective-Communication Primitives." I have a few more ideas that I'm exploring in this area and I should have a paper available later this year.
In my second year, I also did a cool project with Eric Anderson for the distributed and parallel systems course. It was a general-purpose, user-programmable, kernel-resident packet processor (boy, what a mouthful!). It was a good idea, but its performance suffered a little due to a massive, sparse switch statement that was converted to a series of tests and branches by the compiler. We never had time to rewrite it and collect some real times, but we're sure they would have been great. Actually, dynamic compilation would have been ideal for this application. We also had some other ideas for optimizing the computation of which packet processors should execute on a given packet, but development on SPIN's networking framework was beginning by the end of the quarter, so we didn't bother to pursue it further.
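To give an idea of what went wrong (the names below are hypothetical, not our actual code): dispatching on a sparse set of packet-type values with a switch tends to get lowered into a long chain of compares and branches, whereas a small table of handler pointers reaches the right handler in roughly constant time. Something like the second version is what a rewrite would have looked like, dynamic compilation aside.

    #include <stddef.h>

    typedef void (*handler_fn)(const char *pkt, size_t len);

    static void handle_telnet(const char *pkt, size_t len) { (void)pkt; (void)len; }
    static void handle_nfs(const char *pkt, size_t len)    { (void)pkt; (void)len; }

    /* Switch version: sparse case labels are typically compiled into a chain
       of compares and branches, which is where our time went. */
    static void dispatch_switch(unsigned port, const char *pkt, size_t len)
    {
        switch (port) {
        case 23:   handle_telnet(pkt, len); break;
        case 2049: handle_nfs(pkt, len);    break;
        /* ... many more sparse cases ... */
        default:   break;
        }
    }

    /* Table version: one probe into a handler table instead of many branches. */
    #define NBUCKETS 256
    static struct { unsigned port; handler_fn fn; } handlers[NBUCKETS];

    static void dispatch_table(unsigned port, const char *pkt, size_t len)
    {
        unsigned i = port % NBUCKETS;
        if (handlers[i].fn != NULL && handlers[i].port == port)
            handlers[i].fn(pkt, len);
    }

    int main(void)
    {
        handlers[23 % NBUCKETS].port = 23;
        handlers[23 % NBUCKETS].fn   = handle_telnet;
        dispatch_switch(23, "x", 1);
        dispatch_table(23, "x", 1);
        return 0;
    }
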
During my first year at UW, when my office was in Sieg, I was a teaching assistant, first for CSE/ENGR-142, the first-semester programming course (when it was in Ada), then for CSE-451, the undergraduate operating systems course. Now, my office is in The Chateau, and I am supported by an NSF Graduate Fellowship.
I spent three summers at Lawrence Livermore National Laboratory. My first summer at LLNL, I studied cluster message-passing systems, mainly PVM. During the next two summers, while working on my main project, a parallel climate model, I looked into MPI, even attending one of the MPI Forums in Dallas. I gave a tutorial on MPI at LLNL, and later another at Boeing Computing Services.
The climate model I worked on, the Earth System Model (ESM), is a framework for parallel global climate modeling. The requirements of the ESM project are that the model be transportable, scalable, and modular. Transportability of the code was deemed crucial, since it lets the group take advantage of the latest, most powerful multicomputers available. I ported the code to MPI, instrumented it for post-mortem performance evaluation using ParaGraph, Upshot, Pablo, or some of my own tools, and worked on building a library of portable collective-communication routines.
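As a flavor of the collective-communication work (a minimal sketch written for this page, not a routine from the ESM library): a portable collective can be layered entirely on point-to-point MPI calls, so it runs on any MPI implementation. A real library routine would use a tree so the broadcast finishes in O(log p) steps rather than p-1.

    #include <mpi.h>

    /* Linear broadcast from `root`, built only on point-to-point calls. */
    int my_bcast(void *buf, int count, MPI_Datatype type, int root, MPI_Comm comm)
    {
        int rank, size, i;
        MPI_Status status;

        MPI_Comm_rank(comm, &rank);
        MPI_Comm_size(comm, &size);

        if (rank == root) {
            for (i = 0; i < size; i++)
                if (i != root)
                    MPI_Send(buf, count, type, i, 0, comm);
        } else {
            MPI_Recv(buf, count, type, root, 0, comm, &status);
        }
        return MPI_SUCCESS;
    }

    int main(int argc, char **argv)
    {
        int rank;
        double t = 0.0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0)
            t = 42.0;           /* only the root has the value initially */
        my_bcast(&t, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);
        /* every rank now has t == 42.0 */
        MPI_Finalize();
        return 0;
    }
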
I got my undergraduate degree in computer science and mathematics at Purdue University. My senior year I was president of the Purdue Student Chapter of the ACM. I also helped start a UPE chapter, and served as the chapter's second president. Additionally, I helped to resurrect the defunct math club by serving as its treasurer, then vice president, and by helping to make it a student chapter of the Mathematical Association of America.