If you're in Soda Hall, you'll find copies of most of my papers hanging outside
my office (625). Take the ones you want.
FAR AND AWAY MY MOST POPULAR PAPER is my introduction to the
conjugate gradient method.
This report is an exercise in trying to make
a difficult subject as transparent and easy to understand as humanly possible.
It includes sixty-six illustrations and as much intuition as I can provide.
How could fifteen lines of pseudocode take fifty pages to explain?
Also available is a set of full-page figures from the paper,
which may be printed on transparencies for classroom use.
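
To make the fifteen lines concrete, here is a minimal conjugate gradient solver in Python. It is an illustrative sketch rather than the paper's pseudocode: it solves Ax = b for a symmetric positive-definite A using the standard residual and search-direction recurrences the report explains.

```python
def matvec(A, x):
    """Multiply a dense matrix (a list of rows) by a vector."""
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def conjugate_gradient(A, b, tol=1e-10, max_iters=100):
    """Solve A x = b for symmetric positive-definite A."""
    n = len(b)
    x = [0.0] * n                 # initial guess
    r = b[:]                      # residual r = b - A x (with x = 0)
    d = r[:]                      # initial search direction
    rr = dot(r, r)
    for _ in range(max_iters):
        if rr < tol * tol:
            break
        Ad = matvec(A, d)
        alpha = rr / dot(d, Ad)   # step length along d
        x = [xi + alpha * di for xi, di in zip(x, d)]
        r = [ri - alpha * adi for ri, adi in zip(r, Ad)]
        rr_new = dot(r, r)
        beta = rr_new / rr        # conjugation coefficient for the next direction
        d = [ri + beta * di for ri, di in zip(r, d)]
        rr = rr_new
    return x
```

In exact arithmetic the loop converges in at most n iterations for an n-by-n system; the report explains why, and why it does far better than that on well-conditioned problems.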

Jonathan Richard Shewchuk,
An Introduction to the Conjugate Gradient Method Without the Agonizing
Pain, August 1994.
Abstract,
PostScript (1,716k, 58 pages),
PDF (516k, 58 pages),
PostScript of classroom figures (1,409k, 37 pages).
PDF of classroom figures (394k, 37 pages).
Delaunay Mesh
Generation
“I hate meshes. I cannot believe how hard this is.
Geometry is hard.”
— David Baraff, Senior Research Scientist, Pixar Animation Studios
DELAUNAY REFINEMENT MESH GENERATION ALGORITHMS
construct meshes of triangles or tetrahedra (“elements”)
that are suitable for applications like interpolation, rendering,
terrain databases, geographic information systems,
and most demandingly, the solution of partial differential equations
by the finite element method.
Delaunay refinement algorithms operate by maintaining a Delaunay or
constrained Delaunay triangulation, which is refined by inserting
additional vertices until the mesh meets constraints
on element quality and size.
These algorithms simultaneously offer theoretical bounds on element quality,
edge lengths, and spatial grading of element sizes;
the ability to triangulate general straight-line domains
(and not just polygons/polyhedra with holes);
and truly satisfying performance in practice.
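
The decision that drives a refinement loop can be sketched in a few lines of Python. This is an illustration, not code from Triangle or Pyramid: it computes a triangle's circumradius-to-shortest-edge ratio and compares it against the √2 bound for which Ruppert-style refinement is guaranteed to terminate.

```python
import math

def circumradius_to_shortest_edge(a, b, c):
    """Quality ratio of triangle abc: circumradius over shortest edge."""
    la, lb, lc = math.dist(b, c), math.dist(c, a), math.dist(a, b)
    s = 0.5 * (la + lb + lc)
    # Heron's formula for the area, then R = (la*lb*lc) / (4*area).
    area = math.sqrt(max(s * (s - la) * (s - lb) * (s - lc), 0.0))
    R = (la * lb * lc) / (4.0 * area)
    return R / min(la, lb, lc)

def needs_refinement(tri, bound=math.sqrt(2)):
    """Split the triangle (insert its circumcenter) if the ratio is too big."""
    return circumradius_to_shortest_edge(*tri) > bound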
The following papers include theoretical treatments of Delaunay refinement
and discussions of the implementation details of
my two-dimensional mesh generator and Delaunay triangulator, Triangle,
and my three-dimensional mesh generator and Delaunay tetrahedralizer,
Pyramid.
See the
Triangle page for information about what Triangle can do,
or to obtain the C source code.

Delaunay Refinement Algorithms for Triangular
Mesh Generation, Computational Geometry:
Theory and Applications 22(1–3):21–74, May 2002.
PostScript (5,128k, 54 pages),
PDF (1,046k, 54 pages).
My ultimate article on two-dimensional Delaunay
refinement, including a full theoretical treatment plus
extensive pseudocode. This is the first one to read if you want to implement
a triangular Delaunay refinement mesh generator.
(See Chapter 5 of my dissertation for data structures, though.)

Delaunay Refinement Mesh Generation,
Ph.D. thesis, Technical Report CMU-CS-97-137,
School of Computer Science, Carnegie Mellon
University, Pittsburgh, Pennsylvania, 18 May 1997.
Abstract (with BibTeX citation),
compressed PostScript (1,768k, 207 pages)
(expands to 10,435k when uncompressed),
PDF (2,635k, 207 pages).
Includes extensive treatment of Delaunay triangulations,
two- and three-dimensional Delaunay refinement algorithms,
and implementation details.
However, Chapter 3 on two-dimensional Delaunay refinement is superseded
by the much-improved article above.

Tetrahedral Mesh Generation by Delaunay Refinement,
Proceedings of the Fourteenth Annual Symposium on
Computational Geometry (Minneapolis, Minnesota), pages 86–95,
Association for Computing Machinery, June 1998.
PostScript (1,504k, 10 pages),
PDF (299k, 10 pages).
A description of the core three-dimensional mesh generation algorithm
used in Pyramid, for those who want a quick overview with less detail.
A more thorough treatment appears in Chapter 4 of my dissertation.

Triangle: Engineering a 2D Quality Mesh
Generator and Delaunay Triangulator,
in Applied Computational Geometry: Towards Geometric Engineering
(Ming C. Lin and Dinesh Manocha, editors),
volume 1148 of Lecture Notes in Computer Science,
pages 203–222, Springer-Verlag (Berlin), May 1996.
From the First Workshop on Applied Computational Geometry
(Philadelphia, Pennsylvania).
Abstract (with BibTeX citation),
PostScript (513k, 10 pages),
PDF (153k, 10 pages),
HTML.
A short paper on Triangle, for those who want a quick overview
with less detail.
All this material is scattered through my dissertation as well.

François Labelle and Jonathan Richard Shewchuk,
Anisotropic Voronoi Diagrams and
Guaranteed-Quality Anisotropic Mesh Generation,
Proceedings of the Nineteenth Annual Symposium on
Computational Geometry (San Diego, California), pages 191–200,
Association for Computing Machinery, June 2003.
PostScript (910k, 10 pages),
PDF (284k, 10 pages).
The best triangulations for interpolation and numerical modeling
are often anisotropic: long and skinny, oriented in directions
dictated by the function being approximated.
(See my “What Is a Good Linear Element?” papers below for details.)
The ideal orientations and aspect ratios of the elements may vary
greatly from one position to another.
This paper discusses a Voronoi refinement algorithm for
provably good anisotropic mesh generation.
The skewed elements are generated with the help of
anisotropic Voronoi diagrams,
wherein each site has its own distorted distance metric.
Anisotropic Voronoi diagrams are somewhat badly behaved,
and do not always dualize to triangulations.
We call it “Voronoi refinement” because the diagrams
can be tamed by inserting new sites.
After they are carefully refined, they dualize to guaranteed-quality
anisotropic triangular meshes.
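
The primitive underlying these diagrams is simple to state. In the Python sketch below (illustrative only, in 2D), each site carries its own symmetric positive-definite metric tensor, and a query point belongs to whichever site is nearest as measured in that site's own metric. Because every site measures distance differently, cells can be disconnected, which is why the diagrams need taming.

```python
import math

def aniso_distance(p, M, x):
    """Distance from site p to point x, measured in p's own metric M,
    a symmetric positive-definite 2x2 matrix [[a, b], [b, c]]."""
    dx, dy = x[0] - p[0], x[1] - p[1]
    a, b, c = M[0][0], M[0][1], M[1][1]
    return math.sqrt(a * dx * dx + 2 * b * dx * dy + c * dy * dy)

def owner(sites, metrics, x):
    """Index of the site whose anisotropic Voronoi cell contains x:
    the site whose own-metric distance to x is smallest."""
    return min(range(len(sites)),
               key=lambda i: aniso_distance(sites[i], metrics[i], x))
```

A site with a stretched metric "reaches farther" in its preferred direction, so it can claim territory on the far side of another site's cell.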

Talk slides:
Anisotropic Voronoi Diagrams and
GuaranteedQuality Anisotropic Mesh Generation.
PDF (color, 1,301k, 47 pages).
Slides from a talk on our paper above.
You can even
watch
me give a shorter version of this talk at
the Mathematical Sciences Research Institute.

Mesh Generation for Domains with Small Angles,
Proceedings of the Sixteenth Annual Symposium on
Computational Geometry (Hong Kong), pages 1–10,
Association for Computing Machinery, June 2000.
PostScript (663k, 10 pages),
PDF (237k, 10 pages).
How to adapt Delaunay refinement algorithms to domains that are difficult
to mesh because they have small angles.
The two-dimensional portion of this paper is superseded by the
improved writing in “Delaunay Refinement Algorithms for Triangular Mesh
Generation,” above.
The three-dimensional portion is still found only here.

Star Splaying: An Algorithm for Repairing
Delaunay Triangulations and Convex Hulls,
Proceedings of the Twenty-First Annual Symposium on
Computational Geometry (Pisa, Italy), pages 237–246,
Association for Computing Machinery, June 2005.
PostScript (392k, 10 pages),
PDF (209k, 10 pages).
Star splaying is a general-dimensional algorithm for
fixing broken Delaunay triangulations and convex hulls.
Its input is a triangulation, an approximate convex hull,
or even just a set of vertices and guesses about who their neighbors are.
If the input is “nearly Delaunay” or “nearly convex,”
star splaying is quite fast,
so I call it a “Delaunay repair” algorithm.
Star splaying is designed for dynamic mesh generation,
to repair the quality of a finite element mesh
that has lost the Delaunay property after its vertices have moved
in response to simulated physical forces.
Star splaying is akin to Lawson's edge flip algorithm for converting
a twodimensional triangulation to a Delaunay triangulation,
but it works in any dimensionality.
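
The test that drives Lawson's flips is the empty-circumcircle predicate, evaluated with the standard lifting-map determinant. A Python sketch of the 2D version (illustrative; robust implementations use exact arithmetic):

```python
def incircle(a, b, c, d):
    """Positive if d lies inside the circle through a, b, c
    (with a, b, c in counterclockwise order): the standard
    lifting-map determinant test."""
    m = [[a[0]-d[0], a[1]-d[1], (a[0]-d[0])**2 + (a[1]-d[1])**2],
         [b[0]-d[0], b[1]-d[1], (b[0]-d[0])**2 + (b[1]-d[1])**2],
         [c[0]-d[0], c[1]-d[1], (c[0]-d[0])**2 + (c[1]-d[1])**2]]
    return (m[0][0] * (m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1] * (m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2] * (m[1][0]*m[2][1] - m[1][1]*m[2][0]))

def should_flip(a, b, c, d):
    """Edge ab shared by triangles abc and abd (c and d on opposite
    sides): flip ab to cd when d violates the empty-circle property."""
    return incircle(a, b, c, d) > 0
```

Lawson's algorithm repeats this test and flip until no edge fails; star splaying performs analogous repairs on each vertex's star, in any dimension.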

Talk slides:
Theoretically Guaranteed Delaunay Mesh Generation—In Practice,
September 2005.
PDF (color, 2,003k, 106 pages).
Slides from a short course I gave at
the Thirteenth and Fourteenth International Meshing Roundtables.
Not just about my work!
This is a survey of what I see as the most central contributions
to provably good mesh generation, restricted to algorithms
that have been demonstrated to work in practice as well as in theory.
It includes full citations for all the featured algorithms,
and is designed to be a fast, readable introduction to the area.
Topics covered include a short review of Delaunay triangulations and
constrained Delaunay triangulations; extensive coverage of Delaunay refinement
algorithms for triangular and tetrahedral mesh generation, including methods by
Chew, Ruppert, Boivin/Ollivier-Gooch, Miller/Walkington/Pav, Üngör,
and me;
handling of 2D domains with curved boundaries; handling of 2D and 3D domains
with small angles; sliver elimination; and a brief discussion of how to prove
that a Delaunay refinement algorithm eliminates bad elements.
Non-Delaunay
Mesh Generation
THE GREAT CHALLENGE OF TETRAHEDRAL MESH GENERATION
is to create tetrahedra whose dihedral angles are neither too small nor
too large. Although Delaunay refinement usually does this well in practice,
there's lots of room for improvement—both in obtaining mathematically
guaranteed bounds on angles, and in further improving the angles beyond
what Delaunay algorithms or guaranteedquality algorithms can provide.
Our isosurface stuffing algorithm comes with theoretical guarantees
on dihedral angles. Through mesh improvement procedures, we can make them
even better in practice.

François Labelle and Jonathan Richard Shewchuk,
Isosurface Stuffing: Fast Tetrahedral Meshes with Good Dihedral Angles,
ACM Transactions on Graphics 26(3):57.1–57.10, August 2007.
Special issue on Proceedings of SIGGRAPH 2007.
PDF (color, 3,530k, 10 pages).
The isosurface stuffing algorithm fills an isosurface with a mesh
whose dihedral angles are bounded between
10.7° and 164.8°.
We're pretty proud of this, because virtually nobody has been able to prove
dihedral angle bounds anywhere close to this,
except for very simple geometries.
Although the tetrahedra at the isosurface must be uniformly sized,
the tetrahedra in the interior can be graded.
The algorithm is whip fast, numerically robust, and easy to implement
because, like Marching Cubes, it generates tetrahedra from
a small set of precomputed stencils.
The angle bounds are guaranteed by a computerassisted proof.
If the isosurface is a smooth 2-manifold with bounded curvature, and
the tetrahedra are sufficiently small,
then the boundary of the mesh is guaranteed to be a geometrically and
topologically accurate approximation of the isosurface.
Unfortunately, the algorithm rounds off sharp corners and edges.
(I think it will be extremely hard for anyone to devise an algorithm
that provably obtains dihedral angle bounds of this order and
conforms perfectly to creases.)
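
The stencil-selection step can be illustrated in Python. This sketch is simplified to an ordinary cubic lattice (the real algorithm uses a body-centered cubic lattice and warps lattice points near the surface): it classifies a cell's corners as inside or outside the cutoff function and packs the pattern into a key, Marching Cubes style, that would index a table of precomputed stencils.

```python
def corner_signs(f, cell, spacing=1.0):
    """Evaluate a cutoff function f at the 8 corners of a lattice cell
    and pack the inside/outside pattern into an 8-bit stencil key.
    Negative f means inside the isosurface."""
    x, y, z = cell
    key = 0
    for bit, (dx, dy, dz) in enumerate([(0,0,0), (1,0,0), (0,1,0), (1,1,0),
                                        (0,0,1), (1,0,1), (0,1,1), (1,1,1)]):
        if f((x+dx) * spacing, (y+dy) * spacing, (z+dz) * spacing) < 0:
            key |= 1 << bit
    return key
```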

Nuttapong Chentanez, Bryan Feldman, François Labelle,
James O'Brien, and Jonathan Richard Shewchuk,
Liquid Simulation on Lattice-Based Tetrahedral Meshes,
2007 Symposium on Computer Animation (San Diego, California),
pages 219–228, August 2007.
PDF (color, 4,782k, 10 pages).
Here, we use isosurface stuffing (above) as part of an algorithm for
simulating liquids with free surfaces.
The graded meshes allow us to maintain fine detail on the liquid surface
without excessive computational cost in the interior.
The rendered surface is represented as a separate surface triangulation,
at an even finer detail than the surface of the tetrahedral mesh.
We exploit the regularity of the meshes for fast point location,
needed for semi-Lagrangian advection of the velocities and the surface itself.
We also introduce a thickening strategy to prevent the liquid from
breaking up into sheets or droplets so thin that they disappear,
falling below the finest resolution of the mesh.

Bryan Matthew Klingner and Jonathan Richard Shewchuk,
Aggressive Tetrahedral Mesh Improvement,
Proceedings of the 16th International Meshing Roundtable (Seattle, Washington),
pages 3–23, October 2007.
PDF (color, 26,567k, 18 pages).
Mesh cleanup software takes an existing mesh and improves the
quality of its elements by way of operations such as smoothing
(moving vertices to better locations) and topological transformations
(replacing a small set of tetrahedra with better ones).
Here, we demonstrate algorithms and software that so aggressively improve
tetrahedral meshes that we obtain quality substantially better than that
produced by any previous method for tetrahedral mesh generation or
mesh improvement.
Our main innovation is to augment the best traditional cleanup methods
(including some from the paper below) with topological transformations
and combinatorial optimization algorithms that insert new vertices.
Our software often improves a mesh so that all its dihedral angles are
between 30° and 130°.
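
Of the operations mentioned, smoothing is the simplest to sketch. The Python below shows plain Laplacian smoothing, where each free vertex moves to the centroid of its neighbors; the optimization-based smoothing in the paper is more sophisticated, directly improving the worst local quality measure, but a smoothing pass has the same shape.

```python
def laplacian_smooth(vertices, adjacency, fixed, iters=10):
    """Repeatedly move each free vertex to the centroid of its neighbors.
    `vertices` maps a vertex id to its coordinates, `adjacency` maps a
    vertex id to its neighbor ids, and vertices in `fixed` never move."""
    v = {k: list(p) for k, p in vertices.items()}
    for _ in range(iters):
        for k, nbrs in adjacency.items():
            if k in fixed:
                continue
            n = len(nbrs)
            v[k] = [sum(v[j][d] for j in nbrs) / n for d in range(len(v[k]))]
    return v
```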

Two Discrete Optimization Algorithms for the
Topological Improvement of Tetrahedral Meshes,
unpublished manuscript, 2002.
PostScript (295k, 11 pages),
PDF (168k, 11 pages).
This tutorial studies two local topological transformations for
improving tetrahedral meshes: edge removal and
multi-face removal.
Given a selected edge, edge removal deletes all the tetrahedra
that contain the edge, and replaces them with other tetrahedra.
I work out in detail (and attempt to popularize)
an algorithm of Klincsek for finding the optimal set of new tetrahedra.
The multi-face removal operation is the inverse of the edge removal operation.
I give a new algorithm for finding the optimal multi-face removal operation
that involves a selected face of the tetrahedralization.
These algorithms are part of our tetrahedral mesh improvement software
described in the paper above.
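
Klincsek's algorithm is interval dynamic programming: the best triangulation of a sub-polygon of the ring depends only on which chord closes it off. An illustrative Python sketch, specialized to a 2D ring with triangle area standing in for a real quality measure:

```python
def klincsek(ring, quality):
    """Over all triangulations of the polygon `ring` (a list of points),
    return the best achievable minimum triangle quality, by dynamic
    programming over sub-polygons ring[i..j]."""
    n = len(ring)
    INF = float('inf')
    Q = [[INF] * n for _ in range(n)]   # Q[i][j]: best min-quality of ring[i..j]
    for span in range(2, n):
        for i in range(n - span):
            j = i + span
            # Choose the apex k of the triangle resting on chord (i, j).
            Q[i][j] = max(min(Q[i][k], Q[k][j],
                              quality(ring[i], ring[k], ring[j]))
                          for k in range(i + 1, j))
    return Q[0][n - 1]
```

In edge removal, the ring is the cycle of vertices around the doomed edge, and the recurrence maximizes the minimum quality of the replacement triangles (and hence tetrahedra); recovering the triangulation itself is a standard back-pointer addition.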
Streaming
Computation
A STREAMING COMPUTATION MAKES A SMALL NUMBER of
sequential passes over a data file (ideally, one pass), and processes the data
using a memory buffer whose size is a fraction of the stream length.
Streaming allows us to compute Delaunay triangulations of billions of points
on an ordinary laptop computer—and amazingly,
to attain faster speeds than ordinary in-core triangulators.
We also have streaming implementations of several standard GIS computations,
such as converting Triangulated Irregular Networks (TINs,
which are unstructured triangulations used to interpolate elevation fields)
into Digital Elevation Maps (DEMs), and computing isolines.
A major benefit of streaming is quick feedback.
For example, a user can pipe the triangulator's output to our
streaming isocontour extraction module,
whose output is piped to a visualization module.
Isocontours begin to appear within minutes or seconds,
because streaming modules produce output while still consuming input.
If they look wrong, the user can abort the pipeline and restart all
the streaming components with different parameters.
With other methods, users must wait hours for the computations to finish
before glimpsing the results.

Martin Isenburg, Yuanxin Liu, Jonathan Shewchuk, and Jack Snoeyink,
Streaming Computation of Delaunay Triangulations,
ACM Transactions on Graphics 25(3):1049–1056, July 2006.
Special issue on Proceedings of SIGGRAPH 2006.
PDF (color, 9,175k, 8 pages).
We compute a billiontriangle terrain representation for the Neuse River
system from 11.2 GB of LIDAR data in 48 minutes using only 70 MB
of memory on a laptop with two hard drives.
This is a factor of twelve faster than the previous fastest
out-of-core Delaunay triangulation software.
We also construct a nine-billion-triangle, 152 GB triangulation in under
seven hours using 166 MB of main memory.
The main new idea in our streaming Delaunay triangulators is
spatial finalization.
We partition space into regions, and include finalization tags
in the stream that indicate when
no more points in the stream will fall in specified regions.
Our triangulators certify triangles or tetrahedra as Delaunay when
the finalization tags show it is safe to do so.
This makes it possible to write them out early, freeing up memory
to read more from the input stream.
Because only the unfinalized parts of a triangulation are resident in memory,
the memory footprint remains small.
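
The buffering pattern is easy to sketch in Python. This illustration processes a stream of point and finalization events and flushes each spatial cell the moment its finalization tag arrives; the real triangulators instead certify and write out Delaunay triangles, but the memory behavior is the same.

```python
def stream_with_finalization(events, process_cell):
    """Buffer points by grid cell; when a cell's finalization tag arrives,
    hand its points to process_cell and free the buffer.  Each event is
    ('point', cell, data) or ('finalize', cell)."""
    buffers = {}
    for kind, cell, *payload in events:
        if kind == 'point':
            buffers.setdefault(cell, []).append(payload[0])
        elif kind == 'finalize':
            process_cell(cell, buffers.pop(cell, []))  # memory freed here
    return buffers   # whatever was never finalized remains buffered
```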

Video
plus leaflet:
Martin Isenburg, Yuanxin Liu, Jonathan Shewchuk, and Jack Snoeyink,
Illustrating the Streaming Construction of 2D Delaunay Triangulations,
Proceedings of the Fifteenth Video Review of Computational Geometry
(video) and
Proceedings of the Twenty-Second Annual Symposium on
Computational Geometry (Sedona, Arizona), pages 481–482,
Association for Computing Machinery, June 2006
(leaflet).
PDF (color, 766k, 2 pages).
The
video
(your choice of QT/mpeg4 or DivX)
demonstrates the implementation described in our SIGGRAPH paper (above),
and the leaflet describes the video.

Talk slides:
Streaming Construction of Delaunay Triangulations.
PDF (color, 2,058k, 96 pages).

Martin Isenburg, Yuanxin Liu, Jonathan Shewchuk, Jack Snoeyink, and
Tim Thirion,
Generating Raster DEM from Mass Points via TIN Streaming,
Proceedings of the Fourth International Conference on
Geographic Information Science
(GIScience 2006, Münster, Germany), September 2006.
PostScript (color, 16,554k, 13 pages).
PDF (color, 490k, 13 pages).

Martin Isenburg, Peter Lindstrom, Stefan Gumhold, and Jonathan Shewchuk,
Streaming Compression of Tetrahedral Volume Meshes,
Proceedings: Graphics Interface 2006 (Quebec City, Quebec, Canada),
pages 115–121, June 2006.
PDF (color, 3,821k, 7 pages).
Finite Element
Quality
IT IS NOT EASY TO FORMULATE the problem that
a mesh generator is to solve.
The natural first question is how to characterize good and bad
triangles and tetrahedra based on their sizes and shapes.
The answer to that question depends on the application.
The universal concern is that the errors introduced by interpolation
be as small as possible.
In the finite element method, another concern (with different implications)
is that the condition numbers of the stiffness matrices be small.
Forty-odd years after the invention of the finite element method,
our understanding of the relationship between mesh geometry,
numerical accuracy, and stiffness matrix conditioning remains incomplete,
especially in anisotropic cases.
The following papers examine these issues for linear elements, and
present error bounds and quality measures to help guide
numerical analysts, researchers in triangulation and mesh generation,
and application writers in graphics and geographic information systems.

What Is a Good Linear Finite Element?
Interpolation, Conditioning, Anisotropy, and Quality Measures,
unpublished preprint, 2002.
COMMENTS NEEDED! Help me improve this manuscript.
If you read this, please send feedback.
PostScript (5,336k, 66 pages),
PDF (1,190k, 66 pages).
Why are elements with tiny angles harmless for interpolation,
but deadly for stiffness matrix conditioning?
Why are long, thin elements with angles near 180° terrible in
isotropic cases but perfectly acceptable, if they're aligned properly,
for anisotropic PDEs whose solutions have anisotropic curvature?
Why do elements that are too long and thin
sometimes offer unexpectedly accurate PDE solutions?
Why can interpolation error, discretization error, and stiffness matrix
conditioning sometimes have a threeway disagreement about the aspect ratio
and alignment of the ideal element?
Why do scale-invariant element quality measures often lead to incorrect
conclusions about how to improve a finite element mesh?
Why is the popular inradius-to-circumradius ratio such
an ineffective quality measure for optimization-based mesh smoothing?
All is revealed here.
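
Two of the measures under discussion take only a few lines to compute. This Python sketch (illustrative only) evaluates a triangle's minimum angle and its inradius-to-circumradius ratio; both are scale-invariant, and both are maximized by the equilateral triangle, where the ratio is exactly 1/2.

```python
import math

def quality_measures(a, b, c):
    """Return (minimum angle in degrees, inradius/circumradius)
    for the triangle abc."""
    la, lb, lc = math.dist(b, c), math.dist(c, a), math.dist(a, b)
    s = 0.5 * (la + lb + lc)
    area = math.sqrt(max(s * (s - la) * (s - lb) * (s - lc), 0.0))
    r = area / s                          # inradius
    R = la * lb * lc / (4.0 * area)       # circumradius
    # Law of cosines, once per corner.
    angles = [math.acos((y*y + z*z - x*x) / (2.0 * y * z))
              for x, y, z in [(la, lb, lc), (lb, lc, la), (lc, la, lb)]]
    return math.degrees(min(angles)), r / R
```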

What Is a Good Linear Element?
Interpolation, Conditioning, and Quality Measures,
Eleventh International Meshing Roundtable (Ithaca, New York),
pages 115–126, Sandia National Laboratories, September 2002.
PostScript (1,083k, 12 pages),
PDF (250k, 12 pages).
A greatly abbreviated version (omitting the anisotropic cases, the derivations,
and the discussions of discretization error and time-dependent problems)
of the manuscript above.

Talk slides:
What Is a Good Linear Finite Element?
Interpolation, Conditioning, Anisotropy, and Quality Measures.
PDF (color, 316k, 32 pages).
An overview of the basic bounds on interpolation errors and
maximum eigenvalues of element stiffness matrices,
as well as the quality measures associated with them.
Includes a brief discussion of how anisotropy affects these bounds
and the shape of the ideal element.
Constrained
Delaunay Triangulations
THE CONSTRAINED DELAUNAY TRIANGULATION (CDT)
is a fundamental twodimensional geometric structure with applications
in interpolation, rendering, and mesh generation.
Unfortunately, it has not hitherto been generalized to higher dimensions,
and can never be fully generalized because not every polyhedron has
a constrained tetrahedralization (allowing no additional vertices).
Here, however, I prove that there is an easily tested
condition that guarantees that a polyhedron (or piecewise linear domain)
in three or more dimensions does have a constrained Delaunay triangulation.
(A domain that satisfies the condition is said to be ridge-protected.)
Suppose you want to tetrahedralize a threedimensional domain.
The result implies that if you insert enough extra vertices on the
boundary of a facet to recover its edges in a Delaunay tetrahedralization
(in other words, if you make it ridge-protected)
then you can recover the facet's interior for free—that is,
you can force the triangular faces of the tetrahedralization
to conform to the facet without inserting yet more vertices.
This method of facet recovery is immediately useful for mesh generation
or the interpolation of discontinuous functions.
(The result also fills a theoretical hole in my dissertation by
showing that it is safe to delete a vertex from
a constrained Delaunay tetrahedralization in the circumstances
where my “diametral lens” algorithm does so.)
I provide two algorithms for constructing
constrained Delaunay triangulations that are fast enough
to be useful in practice.
One is based on bistellar flips (which swap a few tetrahedra for a few others),
and one is a sweep algorithm.
The flip algorithm is easier to implement,
and is probably usually faster in practice.
However, the sweep algorithm works on almost every input that has a CDT,
whereas the flip algorithm works only on ridge-protected inputs.
The question of which algorithm is asymptotically faster is tricky—the
answer depends on the size of the output, and is different for
a worstcase input than for a random input;
see the flip algorithm paper for details.
See the “Strange Complexity” paper to find out why
the sweep algorithm doesn't work on every input that has a CDT.

Talk slides:
Constrained Delaunay Tetrahedralizations, Bistellar Flips, and
Provably Good Boundary Recovery.
PDF (color, 233k, 49 pages).
Slides from a talk that covers the CDT existence theorem,
the vertex insertion algorithm for provably good boundary recovery,
and the flip algorithm for inserting a facet into a CDT.
You can even
watch
me give this talk at the Mathematical Sciences Research Institute.

Constrained Delaunay Tetrahedralizations and
Provably Good Boundary Recovery,
Eleventh International Meshing Roundtable (Ithaca, New York),
pages 193–204, Sandia National Laboratories, September 2002.
PostScript (410k, 12 pages),
PDF (187k, 12 pages).
A basic guide: why constrained Delaunay tetrahedralizations are good for
boundary recovery, and how to use them effectively.
Includes an algorithm for inserting vertices to recover the segments
of an input domain (e.g. a polyhedron),
and highlevel descriptions of two algorithms for constructing the
constrained Delaunay tetrahedralization of the augmented domain.
(Unfortunately, this paper predates the flip algorithm,
which I think is usually a better choice in practice.)
“Provably good boundary recovery” means that no edge of the
final tetrahedralization is shorter than one quarter of the local feature size,
so any subsequent Delaunay refinement will not be forced to create
unnecessarily small elements.
(An exception is any edge that spans two segments of the input domain
separated by a small angle; a weaker bound applies to such an edge.)
Aimed at programmers and practitioners.
This paper discusses the three-dimensional case only,
unlike most of the papers below.

General-Dimensional Constrained Delaunay and
Constrained Regular Triangulations, I: Combinatorial Properties.
Discrete & Computational Geometry 39(1–3):580–637, March 2008.
PostScript (865k, 54 pages),
PDF (447k, 54 pages).
This manuscript lays down the combinatorial foundations of CDTs and
weighted CDTs.
It begins by proving that many properties of Delaunay triangulations
(of any dimension)
generalize to constrained Delaunay triangulations—in particular,
the Delaunay Lemma, which states that a triangulation is a weighted CDT
if and only if every (d−1)-dimensional face is either
“locally regular” (locally convex on the lifting map) or
a constraining face.
Next, the manuscript shows that (weighted) CDTs have several optimality
properties when used for piecewise linear interpolation.
It culminates with the proof that if an input is weakly ridge-protected
(a less restrictive condition than ridge-protected), it has a CDT.
This proof also applies to weighted CDTs.
This paper is the ideal starting point for researchers
who want to work with CDTs of dimension higher than two,
and it is the foundation of the correctness proofs of my CDT construction
algorithms.
For those who don't want to read the proofs,
the introduction summarizes the results and how to use them.
Aimed at computational geometers.
Discusses the generaldimensional case.

A Condition Guaranteeing the Existence of
Higher-Dimensional Constrained Delaunay Triangulations,
Proceedings of the Fourteenth Annual Symposium on
Computational Geometry (Minneapolis, Minnesota), pages 76–85,
Association for Computing Machinery, June 1998.
PostScript (328k, 10 pages),
PDF (181k, 10 pages).
An early version of the CDT existence proof,
which is the most difficult proof I've ever done.
This version of the proof is shorter than the more general and rigorous
proof in the paper above, but it also has some unnecessary complications that
I excised from the later version.
The proof is tough reading, but you don't need to understand it to use the
result.
Also includes a discussion of a slow-but-simple gift-wrapping algorithm
for constructing a constrained Delaunay triangulation.
This paper does not discuss weighted CDTs (constrained regular triangulations);
see the paper above for that.
Aimed at computational geometers.
Discusses the generaldimensional case.

Updating and Constructing Constrained Delaunay and
Constrained Regular Triangulations by Flips,
Proceedings of the Nineteenth Annual Symposium on
Computational Geometry (San Diego, California), pages 181–190,
Association for Computing Machinery, June 2003.
PostScript (545k, 14 pages including a four-page appendix
not in the published version),
PDF (244k, 14 pages).
If you want to incrementally update a constrained Delaunay triangulation,
you need four operations:
inserting and deleting a facet, and inserting and deleting a vertex.
(To “insert a facet” is to force the faces of the CDT
to respect the facet;
to “delete a facet” is to relax the facet constraint so
the CDT can act a little more Delaunay and a little less constrained.)
This paper gives algorithms for the first two, based on simple bistellar flips.
A sweep algorithm for deleting a vertex appears in the paper below
(and it is trivially converted to a flip algorithm).
Finally, Barry Joe's flip algorithm for inserting a vertex
into a Delaunay triangulation is easily modified to work in a CDT.
These operations work in any dimensionality, and they can all be applied
to the more general class of
constrained regular triangulations (which include CDTs).
By starting with a Delaunay (or regular) triangulation and
incrementally inserting facets one by one,
you can construct the constrained Delaunay
(or constrained regular) triangulation of a ridge-protected input in
O(n_{v}^{⌈d/2⌉ + 1} log n_{v}) time,
where n_{v} is the number of input vertices
and d is the dimensionality.
In odd dimensions (including three dimensions, which is what I care about most)
this is within a factor of log n_{v} of worst-case optimal.
The algorithm is likely to take only
O(n_{v} log n_{v})
time in many practical cases.
Aimed at both programmers and computational geometers.
Discusses the generaldimensional case, but most useful in three dimensions.

Sweep Algorithms for Constructing
Higher-Dimensional Constrained Delaunay Triangulations,
Proceedings of the Sixteenth Annual Symposium on
Computational Geometry (Hong Kong), pages 350–359,
Association for Computing Machinery, June 2000.
PostScript (352k, 10 pages),
PDF (195k, 10 pages).
Gives an O(n_{v} n_{s})-time sweep algorithm
for constructing a constrained Delaunay triangulation,
where n_{v} is the number of input vertices,
and n_{s} is the number of simplices in the triangulation.
(The algorithm is likely to be faster in most practical cases.)
The running time improves to
O(n_{s} log n_{v})
for star-shaped polytopes, yielding an efficient way to delete a vertex
from a CDT.

Nicolas Grislain and Jonathan Richard Shewchuk,
The Strange Complexity of Constrained Delaunay Triangulation,
Proceedings of the Fifteenth Canadian Conference on Computational Geometry
(Halifax, Nova Scotia), pages 89–93, August 2003.
PostScript (195k, 4 pages),
PDF (73k, 4 pages).
The problem of constructing
a constrained Delaunay tetrahedralization has the unusual status
(for a small-dimensional problem) of being NP-hard only for
degenerate inputs, namely those with subsets of five or more
cospherical vertices.
This paper proves one half of that statement:
it is NP-hard to decide whether a polyhedron
has a constrained Delaunay tetrahedralization.
The paper on sweep algorithms (above) contains the proof of the other half:
for a polyhedron (or more generally, a piecewise linear complex)
with no five vertices lying on a common sphere, a polynomial-time algorithm
constructs the CDT (if it exists) and thereby solves the decision problem.
Freaky, eh?

Stabbing Delaunay Tetrahedralizations,
Discrete & Computational Geometry 32(3):339–343, October 2004.
PostScript (143k, 4 pages),
PDF (70k, 4 pages),
HTML.
This note answers (pessimistically) the formerly open question of
how many tetrahedra in an n-point Delaunay tetrahedralization
can be stabbed by a straight line.
The answer: for a worst-case tetrahedralization,
a line can intersect the interiors of Θ(n^{2}) tetrahedra.
In d dimensions,
a line can stab the interiors of Θ(n^{⌈d/2⌉})
Delaunay d-simplices.
The result explains why my sweep algorithm for constructing CDTs
has a worst-case running time of O(n_{v} n_{s}) and not
O(n_{v}^{2} + n_{s} log n_{v}).
The difficulty of finding the worst-case example explains why
the sweep algorithm is unlikely to take longer than
O(n_{v}^{2} + n_{s} log n_{v})
time on any real-world input.
Surface
Reconstruction
WE WISH TO RECONSTRUCT CLOSED SURFACES
from simple geometric inputs like sets of points,
sampled from the surface of a three-dimensional
object using a laser range finder, or sets of polygons, which are often
used as crude geometric models for rendering.
However, the input data are rarely wellbehaved.
Laser range finders are imperfect physical devices that introduce
random errors (noise) into the point coordinates, and often introduce
points that don't lie anywhere near the surface (outliers) as well.
Moreover, range finders can't usually scan
an entire object's surface—portions
of the surface are left undersampled or unsampled.
Polygon inputs used for rendering are rarely topologically consistent—the
polygons often have spurious intersections or
leave small gaps in what should be a closed surface.
For either type of input, we wish to generate a watertight surface that
approximates the surface suggested by the data.

Ravikrishna Kolluri, Jonathan Richard Shewchuk, and James F. O'Brien,
Spectral Surface Reconstruction from Noisy Point Clouds,
Symposium on Geometry Processing 2004 (Nice, France), pages 11–21,
Eurographics Association, July 2004.
PDF (color, 7,648k, 11 pages).
Researchers have put forth several provably good Delaunay-based algorithms
for surface reconstruction from unorganized point sets.
However, in the presence of undersampling, noise, and outliers,
they are neither “provably good” nor robust in practice.
Our Eigencrust algorithm uses a spectral graph partitioner
to make robust decisions about which
Delaunay tetrahedra are inside the surface and which are outside.
In practice, the Eigencrust algorithm handles undersampling, noise,
and outliers quite well, while giving essentially the same results as the
provably good Tight Cocone or Powercrust algorithms on “clean”
point sets.
(There is no theory in this paper, though.)
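The spectral step can be illustrated in miniature. This is not Eigencrust itself (which builds a weighted graph over Delaunay tetrahedra with carefully chosen affinities), but the generic Fiedler-vector bisection it relies on: partition a graph's nodes by the sign of the eigenvector belonging to the second-smallest eigenvalue of the graph Laplacian. All names and the toy graph below are illustrative.

```python
import numpy as np

def spectral_bisect(weights):
    """Partition graph nodes into two sets by the sign of the Fiedler
    vector (eigenvector of the second-smallest Laplacian eigenvalue)."""
    W = np.asarray(weights, dtype=float)
    D = np.diag(W.sum(axis=1))   # degree matrix
    L = D - W                    # graph Laplacian
    eigvals, eigvecs = np.linalg.eigh(L)   # eigh returns ascending eigenvalues
    fiedler = eigvecs[:, 1]      # eigenvector of second-smallest eigenvalue
    return fiedler >= 0

# Two triangles joined by one weak edge: nodes 0-2 vs. nodes 3-5.
W = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    W[i, j] = W[j, i] = 1.0
W[2, 3] = W[3, 2] = 0.1          # weak bridge between the two clusters
side = spectral_bisect(W)        # the two triangles land on opposite sides
```

Because the Fiedler vector is orthogonal to the constant vector, both signs always appear, so the cut is never trivial; the weak bridge is where the sign changes.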

Talk slides:
Spectral Surface Reconstruction from Noisy Point Clouds.
PDF (color, 1,758k, 53 pages).
Slides from a talk on our paper above.

Chen Shen, James F. O'Brien, and Jonathan R. Shewchuk,
Interpolating and Approximating Implicit Surfaces from Polygon Soup,
ACM Transactions on Graphics 23(3):896-904, August 2004.
Special issue on Proceedings of SIGGRAPH 2004.
PDF (color, 17,269k, 9 pages).
The Moving Least Squares (MLS) method is a popular way to define
an implicit surface that interpolates or approximates a set of points
in three-dimensional space.
But graphics programmers have made millions of polygonalized surface
models; what if we want to interpolate whole polygons?
Approximating a polygon as a bunch of points gives poor results.
Instead, we show how to force an MLS function to have a specified value over
each input polygon, by integrating constraints over triangles.
Better yet, we show how to force the MLS function to have a specified gradient
over each polygon as well,
so that we can robustly specify which parts of space should be
inside or outside the implicit surface—without creating
undue oscillations in the MLS function.
The trick is to define a different function on each input polygon, and
use MLS to interpolate between functions—not
just to interpolate between values
(as the usual formulations of MLS for implicit surfaces do).
This trick gives us profound control of an MLS function.
Although our examples are all surfaces embedded in three-dimensional space,
the techniques generalize to any dimensionality.
Geometric
Robustness
GEOMETRIC PROGRAMS ARE SURPRISINGLY SUSCEPTIBLE
to failure because of floatingpoint roundoff error.
Robustness problems can be solved by using exact arithmetic,
at the cost of reducing program speed by a factor of ten or more.
Here, I describe a strategy for computing correct answers quickly
when the inputs are floatingpoint values.
(Much other research has dealt with the problem for integer inputs, which
are less convenient for users but more tractable for robustness researchers.)
To make robust geometric tests fast, I propose two new techniques
(which can also be applied to other problems of numerical accuracy).
First, I develop and prove the correctness of software-level algorithms
for arbitrary precision floating-point arithmetic.
These algorithms are refinements (especially with regard to speed)
of algorithms suggested by Douglas Priest,
and are roughly five times faster than the best available competing method
when values of small or intermediate precision (hundreds or thousands of bits)
are used.
Second, I show how simple expressions (whose only operations are
addition, subtraction, and multiplication) can be computed adaptively,
trading off accuracy and speed as necessary
to satisfy an error bound as quickly as possible.
(This technique is probably applicable to any exact arithmetic scheme.)
I apply these ideas to build fast, correct
orientation and incircle tests in two and three dimensions,
and to make robust the implementations of
two- and three-dimensional Delaunay triangulation in Triangle and Pyramid.
Detailed measurements show that in most circumstances,
these programs run nearly as quickly when using my adaptive predicates
as they do using nonrobust predicates.
See my
Robust Predicates page for more information
about this research, or to obtain C source code for
exact floating-point addition and multiplication
and the robust geometric predicates.
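The expansion arithmetic in these papers is built from error-free transformations such as Knuth's TwoSum, which recovers the roundoff error of a single floating-point addition exactly. A minimal Python sketch (the actual predicates are C, and also use the cheaper FastTwoSum and TwoProduct operations):

```python
from fractions import Fraction   # exact rationals, used only to verify

def two_sum(a, b):
    """Knuth's TwoSum: returns (s, e) with s = fl(a + b) and
    a + b = s + e exactly, using six floating-point operations.
    Correct under IEEE 754 round-to-nearest."""
    s = a + b
    bv = s - a                   # the part of b that actually made it into s
    av = s - bv                  # the part of a that actually made it into s
    e = (a - av) + (b - bv)      # the roundoff error, recovered exactly
    return s, e

s, e = two_sum(0.1, 0.2)
# s is the rounded sum 0.30000000000000004; e is the tiny roundoff,
# and s + e equals the true real-number sum of the two input floats:
assert Fraction(s) + Fraction(e) == Fraction(0.1) + Fraction(0.2)
```

Chains of such (s, e) pairs form the "expansions" of the arbitrary-precision scheme: a number is stored as a sum of nonoverlapping floats, most significant last.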

Adaptive Precision Floating-Point
Arithmetic and Fast Robust Geometric Predicates,
Discrete & Computational Geometry 18(3):305-363, October 1997.
PostScript (775k, 55 pages),
PDF (556k, 55 pages).
Also appears as Chapter 6 of my dissertation.

Robust Adaptive Floating-Point Geometric
Predicates, Proceedings of the Twelfth Annual Symposium on
Computational Geometry (Philadelphia, Pennsylvania), pages 141-150,
Association for Computing Machinery, May 1996.
Abstract (with BibTeX citation),
PostScript (310k, 10 pages),
PDF (174k, 10 pages).
A very abbreviated summary of the ideas from the full-length paper
above.
The Quake
Project
PAPERS ABOUT
THE QUAKE PROJECT,
a multidisciplinary Grand Challenge Application
Group studying ground motion in large basins during strong earthquakes,
with the goal of characterizing the seismic response of the Los Angeles basin.
The Quake Project is a joint effort between the departments of Computer Science
and Civil and Environmental Engineering at Carnegie Mellon, the
Southern California Earthquake Center, and the National University of Mexico.
We've created some of the largest unstructured finite element simulations
ever carried out; in particular,
the papers below describe a simulation of ground motion during an
aftershock of the 1994 Northridge Earthquake.

Hesheng Bao, Jacobo Bielak, Omar Ghattas, Loukas F. Kallivokas,
David R. O'Hallaron, Jonathan R. Shewchuk, and Jifeng Xu,
Large-scale Simulation of Elastic Wave Propagation in Heterogeneous Media
on Parallel Computers,
Computer Methods in Applied Mechanics and Engineering 152(1-2):85-102,
22 January 1998.
Abstract (with BibTeX citation),
Compressed PostScript (color, 4,262k, 34 pages)
(Expands to 41,267k when uncompressed),
PDF (color, 6,681k, 34 pages),
HTML.
The PostScript version uncompresses into six PostScript files.
The first and last of these files are black-and-white.
The middle four are huge and in full color.

Hesheng Bao, Jacobo Bielak, Omar Ghattas, David R. O'Hallaron,
Loukas F. Kallivokas, Jonathan R. Shewchuk, and Jifeng Xu,
Earthquake Ground Motion Modeling on
Parallel Computers, Supercomputing '96 (Pittsburgh, Pennsylvania),
November 1996.
Abstract (with BibTeX citation),
PostScript (color, 9,370k, 19 pages),
PDF (color, 2,436k, 19 pages),
HTML.
An abbreviated version of the full-length paper above.
OUR SECRET TO PRODUCING such huge unstructured simulations?
With the collaboration of David O'Hallaron, I've written
Archimedes,
a chain of tools for automating the construction of general-purpose
finite element simulations on parallel computers.
In addition to the mesh generators Triangle and Pyramid discussed above,
Archimedes includes Slice,
a mesh partitioner based on geometric recursive bisection;
Parcel, which performs the surprisingly jumbled task of
computing communication schedules and
reordering partitioned mesh data into a format a parallel simulation can use;
and Author, which generates parallel C code from highlevel
machine-independent programs
(which are currently written by the civil engineers in our group).
Archimedes has made it possible for the Quake Project
to weather four consecutive changes in parallel architecture
without missing a beat.
The most recent information about Archimedes is contained in the Quake papers
listed above.
See also the
Archimedes page.
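Slice's partitioning strategy, geometric recursive bisection, can be sketched in a few lines: repeatedly cut the point set at the median of its widest coordinate axis. This toy version partitions bare points rather than mesh elements and ignores Slice's actual data structures; every name in it is illustrative.

```python
def recursive_bisect(points, depth, part=0):
    """Geometric recursive bisection: split the point set at the median
    of its widest coordinate axis, recursing `depth` times to produce
    2**depth balanced parts.  Returns a dict mapping point -> part id."""
    if depth == 0:
        return {p: part for p in points}
    # choose the axis along which the points are most spread out
    axis = max(range(len(points[0])),
               key=lambda a: max(p[a] for p in points) -
                             min(p[a] for p in points))
    pts = sorted(points, key=lambda p: p[axis])
    mid = len(pts) // 2                        # median split
    out = recursive_bisect(pts[:mid], depth - 1, 2 * part)
    out.update(recursive_bisect(pts[mid:], depth - 1, 2 * part + 1))
    return out

pts = [(x * 1.0, y * 1.0) for x in range(4) for y in range(4)]
parts = recursive_bisect(pts, 2)   # 16 points into 4 spatially coherent parts
```

Median splits guarantee balanced part sizes; cutting the widest axis keeps parts compact, which keeps the partition boundary (and hence communication) small.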

Anja Feldmann, Omar Ghattas, John R. Gilbert, Gary L. Miller,
David R. O'Hallaron, Eric J. Schwabe, Jonathan R. Shewchuk, and Shang-Hua Teng,
Automated Parallel Solution of Unstructured PDE Problems,
unpublished manuscript, June 1996.
PostScript (b/w, 1,708k, 19 pages),
PostScript (color, 1,845k, 19 pages).
Because color printing is expensive, you may want to print a complete
black and white copy; then use a color printer to print the following file
(which contains only the five color pages) and
replace the corresponding black and white pages.
PostScript (color pages only, 818k, 5 pages).
A simple overview aimed at people who have little familiarity
with finite element methods or parallel scientific computing.

Jonathan Richard Shewchuk and Omar Ghattas,
A Compiler for Parallel Finite Element Methods with
Domain-Decomposed Unstructured Meshes,
Proceedings of the Seventh International Conference on Domain
Decomposition Methods in Scientific and Engineering Computing
(Pennsylvania State University), Contemporary Mathematics 180
(David E. Keyes and Jinchao Xu, editors), pages 445450,
American Mathematical Society, October 1993.
Abstract,
PostScript (color, 1,203k, 6 pages),
PDF (color, 319k, 6 pages).
A short discussion of our unusual way of storing distributed stiffness matrices
and how it makes some domain decomposition algorithms easy to parallelize.

Eric J. Schwabe, Guy E. Blelloch, Anja Feldmann, Omar Ghattas,
John R. Gilbert, Gary L. Miller, David R. O'Hallaron,
Jonathan R. Shewchuk, and Shang-Hua Teng,
A Separator-Based Framework for Automated Partitioning and Mapping of
Parallel Algorithms for Numerical Solution of PDEs,
Proceedings of the 1992 DAGS/PC Symposium,
Dartmouth Institute for Advanced Graduate Studies, pages 48-62, June 1992.
Abstract,
PostScript (2,247k, 15 pages),
PDF (464k, 15 pages).
This paper is made obsolete by the manuscript
“Automated Parallel Solution of Unstructured PDE Problems” above.
SEVERAL PAPERS ON THE COMPUTATION AND COMMUNICATION DEMANDS
of the Quake Project's parallel finite element simulations,
and what requirements such unstructured simulations will
place on future parallel machines and networks.
For designers of computer architecture who want to better understand
unstructured applications.

David O'Hallaron, Jonathan Richard Shewchuk, and Thomas Gross,
Architectural Implications of a Family of Irregular Applications,
Fourth International Symposium on High Performance Computer Architecture
(Las Vegas, Nevada), February 1998.
Abstract (with BibTeX citation),
PostScript (1,289k, 20 pages),
PDF (218k, 20 pages).

David R. O'Hallaron and Jonathan Richard Shewchuk,
Properties of a Family of Parallel Finite Element Simulations,
Technical Report CMU-CS-96-141, School of Computer Science,
Carnegie Mellon University, Pittsburgh, Pennsylvania, December 1996.
Abstract (with BibTeX citation),
PostScript (987k, 22 pages).
Route
Planning
HERE, WE APPLY CONFORMING DELAUNAY TRIANGULATIONS
to the AI problem of route planning on real-world maps.
The problem is inherently “dirty,” with difficulties ranging
from one-way streets to incorrect maps,
so it is not straightforward even to formally specify a problem to solve.
Our approach is to use an analogical planner,
which uses past experiences to help choose the best result for future travel.
However, this case-based reasoning approach to planning requires
a similarity metric to decide which previous cases are
most appropriate for the current problem.
To define our similarity metric, we first pose a geometric problem
that roughly approximates the problem of discovering good previous cases,
then solve the geometric problem with
a combination of geometric theory and heuristics.
The solution of the geometric problem is then cast
into an approximate solution to the planning problem,
and the rough edges are smoothed by brute-force symbolic planning.
This procedure proves to be faster than
brute-force symbolic planning from scratch.

Karen Zita Haigh, Jonathan Richard Shewchuk, and Manuela M. Veloso,
Exploiting Domain Geometry in Analogical Route Planning,
Journal of Experimental and Theoretical Artificial Intelligence
9(4):509-541, October 1997.
Abstract,
PostScript (1,419k, 30 pages),
PDF (688k, 30 pages).

Karen Zita Haigh, Jonathan Richard Shewchuk, and Manuela M. Veloso,
Route Planning and Learning from Execution,
Working notes from the AAAI Fall Symposium
“Planning and Learning: On to Real Applications”
(New Orleans, Louisiana), pages 58-64, AAAI Press, November 1994.
Abstract,
PostScript (163k, 7 pages),
PDF (106k, 7 pages).
Superseded by the first paper in this list.

Karen Zita Haigh and Jonathan Richard Shewchuk,
Geometric Similarity Metrics for Case-Based Reasoning,
Case-Based Reasoning: Working Notes from the AAAI-94 Workshop
(Seattle, Washington), pages 182-187, AAAI Press, August 1994.
Abstract,
PostScript (156k, 6 pages).
Superseded by the first paper in this list.