*School of Computer Science*

*Carnegie Mellon University*

*Pittsburgh, PA, 15213*

Archimedes is a set of tools, including mesh generators and a prototype parallelizing compiler, for performing unstructured finite element simulations. This page describes the motivation and context for Archimedes, illustrates its basic structure, and acknowledges the researchers who contributed to its development.

The central tool is *Author*, a parallelizing
compiler for unstructured finite element problems. It runs on Unix
workstations and has been targeted to the Intel iWarp,
the Intel Paragon, and workstation clusters.

Figure 1 shows the overall structure of Archimedes. Input
consists of a description of the problem geometry and a sequential,
high-level description of the finite element algorithm, written in
terms of a global system of equations, without any explicit communication
(*send* or *receive*) statements.

**Figure 1:** The structure of Archimedes

The area in yellow is Archimedes. The problem geometry is automatically discretized into an unstructured mesh, which is then partitioned into subdomains, one subdomain for each processor. Archimedes currently supports quality 2D mesh generation and a limited form of 3D mesh generation; implementation of a quality 3D mesher is underway. Finally, based on a high-level finite element algorithm provided by application programmers, code is generated for the individual processors.

See the guitar example for more details on the individual steps.

Here is an example Archimedes program from the CMU Quake project that simulates earthquake-induced ground motion in the San Fernando Valley. The key idea is that it looks like a sequential program, yet it can be compiled and run at high efficiency, without modification, on an arbitrary number of processors on any parallel system with a C compiler and an MPI library.

A short HTML paper on the Quake page summarizes the performance of this code on 256 processors of a Cray T3D.

Jacobo Bielak characterized the earthquake ground motion modeling application that motivated Archimedes. Omar Ghattas provided expertise in finite element methods, iterative methods, and domain decomposition.

Jim Ruppert developed the 2D mesh generation algorithm that Slice uses. Gary Miller, Shang-Hua Teng, William Thurston, and Stephen Vavasis developed the geometric partitioning algorithm, and Eric Schwabe wrote an earlier version of Slice. Tom Warfel developed the communication context switching code for iWarp. David O'Hallaron, Tom Stricker, and Jim Stichnoth developed low-level message passing software for the iWarp. Jim Stichnoth is also currently developing a retargetable backend for parallelizing compilers that will eventually be integrated into Author.

Jifeng Xu and Jacobo Bielak were the first to use Archimedes to do real science. Hesheng Bao and Loukas Kallivokas used Archimedes to develop an earthquake ground motion simulation on the T3D with 13.5 million nodes and 77 million tetrahedral elements, one of the largest unstructured finite element applications ever developed. HT Kung, Thomas Gross, and David O'Hallaron led the CMU team that developed the iWarp system with Intel. iWarp was the first target machine for Archimedes.
