(bio) I am a Research Assistant Professor at TTI-C. I maintain sparse-meshing.com. In December 2007 I completed my Ph.D. in the computer science department at CMU, working with Gary Miller on computational geometry and scientific computing. For my first year and a half there, I worked with Tuomas Sandholm on auction design.
Before that I worked at NASA Ames Research Center in the model-based autonomy group, most notably on the L2 diagnosis engine, now in orbit aboard the EO-1 spacecraft, and on hcc, a hybrid dynamics simulator now busily bitrotting. Earlier still, I did my undergrad at Brown.
This two-page abstract shows how to combine pieces from the Size Complexity and Efficient Query papers below with two observations made at last year's FWCG (one by Timothy Chan, the other I forget), and cook up a nice snack that is halfway to a linear-time surface reconstruction algorithm: I can compute a superset of the restricted Voronoi diagram in linear time. Filtering out the excess remains future work. That step should also run in linear time, but I'll need a coauthor who understands the filtering process better than I do to actually get a proof.
A generalization of Steiner point choice for incremental Delaunay mesh refinement algorithms. I prove that any algorithm choosing points according to my rules terminates with a mesh that is not too large. The rules encompass the techniques most Delaunay refinement algorithms use to mesh point clouds, piecewise linear complexes, or piecewise smooth complexes in arbitrary (fixed) dimension; the user can also explicitly direct the algorithm to refine certain areas of interest. I give examples showing how my result proves in a paragraph what has traditionally taken two pages to show.
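To make the setting concrete, here is the most familiar instance of a Steiner point choice: Ruppert/Chew-style refinement inserts the circumcenter of a triangle whose radius-edge ratio is too large. This is only an illustrative sketch of that classic rule (my paper's rules generalize well beyond it), and all names here are my own, not from any published code.

```python
import math

def circumcenter(a, b, c):
    """Circumcenter of triangle abc in the plane."""
    (ax, ay), (bx, by), (cx, cy) = a, b, c
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax*ax + ay*ay) * (by - cy) + (bx*bx + by*by) * (cy - ay)
          + (cx*cx + cy*cy) * (ay - by)) / d
    uy = ((ax*ax + ay*ay) * (cx - bx) + (bx*bx + by*by) * (ax - cx)
          + (cx*cx + cy*cy) * (bx - ax)) / d
    return (ux, uy)

def radius_edge_ratio(a, b, c):
    """Circumradius divided by shortest edge; large means a skinny triangle."""
    cc = circumcenter(a, b, c)
    r = math.dist(cc, a)
    shortest = min(math.dist(a, b), math.dist(b, c), math.dist(c, a))
    return r / shortest

# A refinement step would insert circumcenter(a, b, c) whenever
# radius_edge_ratio(a, b, c) exceeds the quality bound.
```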
Anecdotally, if we generate a graded volume mesh from a surface mesh, we only need to add a small number of vertices (maybe a factor of two or ten). Indeed, intuition suggests that the grading condition forces each ring of vertices around the surface to have half as many vertices as the previous ring, so the total should be linear. But there are many pathological cases where this intuition fails, so giving a proper proof is non-trivial. We prove that under a quite reasonable condition, the volume mesh has only linearly more vertices than the surface mesh.
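The halving intuition is just a geometric series: if ring i has n/2^i vertices, the total over all rings is below 2n. A toy check (illustrative only; the paper's condition and proof are more subtle than this):

```python
def total_volume_vertices(n, rings):
    """Sum vertices over `rings` rings around a surface mesh of n
    vertices, assuming each ring has half as many as the previous one.
    The geometric series n + n/2 + n/4 + ... stays below 2n."""
    total = 0
    count = n
    for _ in range(rings):
        total += count
        count //= 2  # grading: next ring is half as dense
    return total
```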
The main efficiency question in mesh refinement is how to update the mesh after each insertion. SVR has a top-down refinement strategy that guarantees that updates are always cheap. Har-Peled and Ungor use a bottom-up strategy and build a point location structure on the side. Their result applied only in two dimensions; we extend it to any fixed dimension. We also clarify (and, I claim, simplify) the interface between the mesher and the point location structure.
Includes results from the SVR implementation, and generalizes and improves the dynamic algorithm. It also gives a broader categorization of how one may choose Steiner points, not yet otherwise published. See also my defense talk (Keynote pdf).
An implementation of SVR. For point clouds, SVR is the fastest meshing code available. The code is downloadable cost-free for researchers at sparse-meshing.com. See also the talk (tarred Keynote file).
We show how to produce a balanced quad-tree over a point set in any dimension, and then to update the quad-tree as points are adversarially added to or removed from the point set. Ours is the first solution that provably maintains an optimal-size quality mesh within the optimal O(lg L/s) time bound. In two dimensions, the mesh we maintain is as small as any known practical method produces: we dynamize a quad-tree post-processing algorithm of Har-Peled and Ungor (and we conjecture the same holds in three and higher dimensions). Furthermore, the algorithm is easy to implement because we use self-adjusting computation to dynamize it automatically: the static algorithm plus a few annotations suffice to get working dynamic code.
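For readers unfamiliar with the underlying structure, here is a minimal static point quadtree in 2D. It only illustrates the kind of tree the paper maintains; the real algorithm also balances the tree and is dynamized automatically via self-adjusting computation, none of which appears here. All names are illustrative, not from the paper's code.

```python
class Node:
    def __init__(self, x, y, half):
        self.x, self.y, self.half = x, y, half  # square center and half-width
        self.points = []       # points stored while this node is a leaf
        self.children = None   # four quadrant children once split

    def insert(self, px, py, max_pts=1):
        if self.children is None:
            self.points.append((px, py))
            if len(self.points) > max_pts and self.half > 1e-9:
                self._split()
            return
        self._child(px, py).insert(px, py, max_pts)

    def _split(self):
        h = self.half / 2
        self.children = [Node(self.x + dx * h, self.y + dy * h, h)
                         for dx in (-1, 1) for dy in (-1, 1)]
        pts, self.points = self.points, []
        for (px, py) in pts:
            self._child(px, py).insert(px, py)

    def _child(self, px, py):
        # Children are ordered (-,-), (-,+), (+,-), (+,+).
        i = (2 if px >= self.x else 0) + (1 if py >= self.y else 0)
        return self.children[i]

    def depth(self):
        if self.children is None:
            return 1
        return 1 + max(c.depth() for c in self.children)
```

The depth of such a tree is what the O(lg L/s) bound measures: roughly the log of the ratio between the domain size and the smallest inter-point distance.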
See also an earlier experimental result: Optimal-time dynamic mesh refinement: preliminary results. Umut A. Acar, Benoît Hudson. Fall workshop on Computational Geometry, 2006. (pdf bib)
SVR, in parallel. We get essentially O(log^{2} m) parallel depth and optimal O(n log n + m) work bounds for meshing PLC input in any fixed dimension. We produce good aspect-ratio meshes rather than merely good radius-edge meshes by using the Li-Teng technique, which we show how to parallelize.
Mesh refinement in any fixed dimension, producing a quality radius-edge mesh that is within a constant factor of optimal in size, in optimal time on a broad class of inputs. In two dimensions, we match two prior results. In three and higher dimensions, ours are the first subquadratic runtimes. See also the technical report CMU-CS-06-132 for a longer version with full algorithms and proofs.
We show how to do rotations in Kirkpatrick's 1982 planar point location structure. In more recent work (not yet published) we extend this to Delaunay triangulations in arbitrary dimension.
Given n agents bidding on k items, where the agents have complicated preferences over the items, how can the auctioneer get all the information it needs to clear the auction without requiring a full set of prices from each agent? This is an experimental study of a few policies. Presented at:
Lead article of the April 2001 Dr. Dobb's Journal. Describes a library of fundamental data structures, among the first implemented for Java. Largely superseded by the Java Collections framework, but still in use in the popular textbook by Goodrich and Tamassia.
A report stemming from my work with Preparata and Upfal on testing their algorithms for fast shotgun sequencing of short genomes such as H. influenzae. The theoretical results are described in Preparata, Frieze, and Upfal 1999.
Other interests, in no particular order:
Rock climbing, cross-country skiing, backpacking and hiking, swing dance,
civ2 and lookalikes. When I have time I try to read Science and NYRB.