Brian Curless is an assistant professor of Computer Science and Engineering at the University of Washington. He received a B.S. in Electrical Engineering from the University of Texas at Austin in 1988 and M.S. and Ph.D. degrees in Electrical Engineering from Stanford University in 1991 and 1997, respectively. After the B.S. degree, Curless developed and implemented high-speed, parallel digital signal processing algorithms at SRI International. While earning the Ph.D., he consulted for Silicon Graphics and built the prototype for SGI's Annotator product, a system for hypermedia annotation of 3D databases. Curless's recent research has focused on acquiring and building complex geometric models using structured light scanning systems. In the vision literature, he has published results on fundamentally better methods for optical triangulation, and at SIGGRAPH, he published a new method for combining range images that led to the first "3D fax" of a geometrically complex object. Curless currently sits on the Technical Advisory Board for Paraform, Inc., a company that is commercializing Stanford-developed technology for building CAD-ready models from range data and polygonal meshes. In the winter of 1999, Curless will work with Marc Levoy on the Digital Michelangelo Project in Florence, where they will capture the geometry and appearance of Michelangelo's statues. His teaching experience spans graduate and undergraduate graphics courses, including courses related to 3D photography taught at both Stanford and the University of Washington. Curless received a university-wide Outstanding Teaching Award from Stanford University in 1992.
Steven Seitz (co-organizer)
Assistant Professor
The Robotics Institute
Carnegie Mellon University
5000 Forbes Ave.
Pittsburgh, PA 15213
Tel: (412) 268-6795
Fax: (412) 268-5669
Email: seitz@cs.cmu.edu
Web: http://www.cs.cmu.edu/~seitz
Steven Seitz is an Assistant Professor of Robotics and Computer Science at Carnegie Mellon University, where he conducts research in image-based rendering, graphics, and computer vision. Before joining the Robotics Institute in August 1998, he spent a year visiting the Vision Technology Group at Microsoft Research, and a previous summer in the Advanced Technology Group at Apple Computer. His current research focuses on the problem of acquiring and manipulating visual representations of real environments using semi- and fully-automated techniques. This effort has led to the development of "View Morphing" techniques for interpolating different images of a scene and voxel-based algorithms for computing photorealistic scene reconstructions. His work in these areas has appeared at SIGGRAPH and in international computer vision conferences and journals. He received his B.A. in computer science and mathematics at the University of California, Berkeley in 1991 and his Ph.D. in computer sciences at the University of Wisconsin, Madison in 1997.
Jean-Yves Bouguet
California Institute of Technology - MS 136-93
1200 East California Blvd
Pasadena, CA 91125
Tel: (626) 395-3272
Fax: (626) 795-8649
Email: bouguetj@vision.caltech.edu
Web: http://www.vision.caltech.edu/bouguetj
Jean-Yves Bouguet received his diplôme d'ingénieur from the École Supérieure d'Ingénieurs en Électrotechnique et Électronique (ESIEE) in 1994 and the MS degree in Electrical Engineering from the California Institute of Technology (Caltech) in 1994. He is now completing his Ph.D. in Electrical Engineering at Caltech in the computational vision group under the supervision of Pietro Perona. His research interests cover passive and active techniques for three-dimensional scene modeling. He has developed a simple and inexpensive method for scanning objects using shadows. This work was first presented at ICCV'98, and a patent is pending on that invention. He also collaborated with Jim Arvo, Peter Schröder, and Pietro Perona in teaching a class on 3D photography from 1996 to 1998 at Caltech. Jean-Yves is currently working in collaboration with Larry Matthies at JPL on the development of passive visual techniques for three-dimensional autonomous navigation targeted toward comet modeling and landing.
Paul Debevec
Research Scientist
University of California at Berkeley
387 Soda Hall #1776
Computer Science Division, UC Berkeley
Berkeley, CA 94720-1776
Tel: (510) 642-9940
Fax: (510) 642-5775
Email: debevec@cs.berkeley.edu
Web: http://www.cs.berkeley.edu/~debevec
Paul Debevec earned degrees in Math and Computer Engineering at the University of Michigan in 1992 and completed his Ph.D. at the University of California at Berkeley in 1996, where he is now a research scientist. Debevec first studied 3D photography in 1989 during a computer vision course taught by Ramesh Jain. In 1991, Debevec used 3D photography to model a 1980 Chevette automobile from a small set of photographs. At Berkeley, Debevec collaborated on several creative projects with Interval Research Corporation that employed a variety of 3D photography techniques, including Michael Naimark's "Immersion" project shown at SIGGRAPH 95 and the art installation "Rouen Revisited" at the SIGGRAPH 96 art show. Debevec's Ph.D. thesis, with Jitendra Malik and C. J. Taylor, presented an interactive method for modeling architectural scenes from sparse sets of photographs and for rendering these scenes realistically. Debevec has co-authored papers in computer vision and computer graphics, spoken at a variety of venues on topics relating to 3D photography, and co-organized the SIGGRAPH 98 course "Image-Based Modeling and Rendering" with Steven Gortler. In 1997, Debevec led the effort to produce an image-based model of the UC Berkeley campus for "The Campanile Movie", a short film shown at the SIGGRAPH 97 Electronic Theater. The following year, he presented the film "Rendering with Natural Light" to demonstrate novel image-based lighting techniques. With interests in art and cinema, Debevec enjoys investigating techniques that are useful for creative applications.
Marc Levoy
Associate Professor
Stanford University
Gates Computer Science Building
Room 366, Wing 3B
Stanford University
Stanford, CA 94305
Tel: (650) 725-4089
Fax: (650) 723-0033
Email: levoy@cs.stanford.edu
Web: http://graphics.stanford.edu/~levoy
Marc Levoy is an associate professor of Computer Science and Electrical Engineering at Stanford University. He received a B. Architecture in 1976 and an M.S. in 1978, both from Cornell University, and a Ph.D. in Computer Science in 1989 from the University of North Carolina at Chapel Hill. Levoy's early research centered on computer-assisted cartoon animation, leading to the development of a computer animation system for Hanna-Barbera Productions. His recent publications are in the areas of volume visualization, rendering algorithms, computer vision, geometric modeling, and user interfaces for imaging and visualization. His current research interests include digitizing the shape and appearance of physical objects using multiple sensing technologies; the creation, representation, and rendering of complex geometric models; image-based modeling and rendering; and applications of computer graphics in art history, preservation, restoration, and archeology. Levoy received the NSF Presidential Young Investigator Award in 1991 and the SIGGRAPH Computer Graphics Achievement Award in 1996 for his work in volume rendering.
A. 8:30 - 8:50, 20 min
Introduction (Curless)
1. Overview of area and the course
2. Acquiring 3D models from images
3. Applications to computer graphics
B. 8:50 - 9:35, 45 min
Acquiring images (Curless and Seitz)
1. Image formation
- The lens law
- Aberrations
2. Media and Sensors
- Film
- CCDs
3. Cameras as radiometric tools
4. Camera calibration (see the sketch after this block)
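As a minimal illustration of the lens law and camera calibration items in this block, here is a Python sketch assuming a simple pinhole model; the focal length, object distance, intrinsic matrix, and 3D point are made-up values, not course material.

    import numpy as np

    # Thin lens law: 1/f = 1/d_o + 1/d_i.  Given a focal length f and an object
    # distance d_o, solve for the image distance d_i (all values are assumed).
    f, d_o = 0.05, 2.0                        # meters
    d_i = 1.0 / (1.0 / f - 1.0 / d_o)

    # Pinhole projection with an intrinsic matrix K of the kind recovered by
    # camera calibration (the numbers here are hypothetical).
    K = np.array([[800.0,   0.0, 320.0],      # fx, skew, cx
                  [  0.0, 800.0, 240.0],      # fy, cy
                  [  0.0,   0.0,   1.0]])
    X = np.array([0.1, -0.2, 2.0])            # a 3D point in camera coordinates
    u, v, w = K @ X
    pixel = (u / w, v / w)                    # image-plane location in pixels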
C. 9:35 - 10:15, 40 min
Overview of passive vision techniques (Seitz)
1. Cues for 3D inference (parallax, shading, focus, texture)
2. Reconstruction techniques
- Stereo (see the sketch after this block)
- Structure from motion
- Shape from shading, photometric stereo
- Shape from focus
- Other approaches
3. Strengths and Limitations
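As a minimal illustration of the stereo item above: the depth-from-disparity relation for a rectified camera pair. The focal length, baseline, and disparities below are hypothetical; real pipelines add matching, rectification, and outlier handling.

    import numpy as np

    # Rectified stereo: depth Z = f * B / d, where f is the focal length in
    # pixels, B the camera baseline in meters, and d the disparity in pixels.
    f_pixels = 800.0                              # assumed focal length
    baseline = 0.12                               # assumed baseline (meters)
    disparity = np.array([40.0, 20.0, 10.0])      # hypothetical matched disparities
    depth = f_pixels * baseline / disparity       # larger disparity -> closer point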
<> 10:15 - 10:30 Break
D. 10:30 - 11:20, 50 min
Façade: modeling architectural scenes (Debevec)
1. Constrained structure recovery
- Architectural primitives
2. Photogrammetry
- Recovering camera parameters (see the sketch after this block)
- Importance of user-interaction
3. Model-based stereo
4. Connections to image-based rendering
- Impact of geometric accuracy on rendering quality
- Local vs. global 3D models
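To make the camera-recovery step concrete, here is a generic sketch of the reprojection-error objective minimized in photogrammetry; it is not Façade's implementation, and the function and parameter names are placeholders.

    import numpy as np

    def reprojection_error(camera_params, model_points, observations, project):
        """Sum of squared distances between observed image features and the
        projections of the 3D model points under the current camera parameters.
        `project` is any pinhole projection function; everything is schematic."""
        total = 0.0
        for X, uv_observed in zip(model_points, observations):
            uv_predicted = project(camera_params, X)
            total += np.sum((np.asarray(uv_predicted) - np.asarray(uv_observed)) ** 2)
        return total

    # A photogrammetric solver adjusts camera_params (and, with architectural
    # constraints, a small set of model parameters) to drive this error down,
    # e.g. with a nonlinear least-squares routine.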
E. 11:20 - 12:00, 40 min
Voxel-based techniques for reconstruction (Seitz)
1. Image-space vs. scene-space techniques
2. Volume intersection
- Shape from silhouettes
3. Voxel coloring (see the sketch after this block)
- Modeling radiance
- Plane-sweep visibility
4. Space carving
- General visibility modeling
- Ambiguities in scene reconstruction
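A schematic sketch of the color-consistency test at the heart of voxel coloring; the plane-sweep visibility bookkeeping is omitted, and the view list, projection function, and threshold are assumptions.

    import numpy as np

    def consistent(voxel_center, views, project, threshold=15.0):
        """Project a voxel into the input images and test whether the sampled
        colors agree (schematic; occlusion/visibility handling omitted)."""
        samples = []
        for image, camera in views:
            u, v = project(camera, voxel_center)
            if 0 <= int(v) < image.shape[0] and 0 <= int(u) < image.shape[1]:
                samples.append(image[int(v), int(u)].astype(float))
        if len(samples) < 2:
            return True                        # too few observations to reject
        samples = np.array(samples)
        return samples.std(axis=0).max() < threshold

    # Voxel coloring sweeps voxel planes in front-to-back order, keeps the
    # consistent voxels (colored by the mean of their samples), and discards
    # the rest; space carving generalizes the visibility reasoning.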
<> 12:00 - 1:30 Lunch
F. 1:30 - 2:15, 45 min
Overview of active vision techniques (Curless)
1. Imaging radar
- Time of flight
- Amplitude modulation
2. Optical triangulation (see the sketch after this block)
- Scanning with points and stripes
- Spacetime analysis
3. Interferometry
- Moiré
4. Structured light applied to passive vision
- Stereo
- Depth from defocus
5. Reflectance capture
- From shape-directed lighting
- Using additional lighting
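Two of the active ranging principles in this block (time of flight and point-based optical triangulation) reduce to short formulas; the following sketch uses assumed baseline, angle, and timing values.

    import math

    # Time of flight: range = c * round_trip_time / 2.
    c = 2.998e8                        # speed of light (m/s)
    round_trip = 20e-9                 # hypothetical 20 ns round trip
    range_tof = c * round_trip / 2.0   # about 3 meters

    # Point-based optical triangulation: a light source and camera separated by
    # a baseline b both view a surface point; theta and phi are the angles at
    # the source and camera, measured from the baseline (values assumed).
    b = 0.3                            # baseline (meters)
    theta = math.radians(30.0)
    phi = math.radians(70.0)
    depth = b * math.sin(theta) * math.sin(phi) / math.sin(theta + phi)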
G. 2:15 - 2:55, 40 min
Desktop 3D photography (Bouguet)
1. Traditional scanning is expensive, but...
desk lamp + pencil = structured light
2. The shadow scanning technique (see the sketch after this block)
- Indoor: on the desktop
- Outdoor: the sun as structured light
3. Calibration issues
4. Temporal analysis for improved accuracy
5. Error Analysis
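The geometric core of shadow scanning is a ray/plane intersection: once the shadow plane at a given instant is known (from the lamp position and the shadow observed on the desk), each pixel crossed by the shadow edge is back-projected onto that plane. The sketch below is illustrative only, not the speaker's implementation; the ray and plane values are assumed, and calibration and temporal processing are omitted.

    import numpy as np

    def intersect_ray_plane(ray_dir, plane_normal, plane_point):
        """Intersect a camera ray through the origin with a plane given by a
        normal and a point on it; degenerate (parallel) cases are ignored."""
        t = (plane_normal @ plane_point) / (plane_normal @ ray_dir)
        return t * ray_dir

    ray = np.array([0.05, -0.02, 1.0])     # pixel ray in camera coordinates (assumed)
    n = np.array([0.3, 0.1, -0.9])         # hypothetical shadow-plane normal
    p0 = np.array([0.0, -0.2, 1.5])        # a point on that shadow plane (assumed)
    surface_point = intersect_ray_plane(ray, n, p0)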
H. 2:55 - 3:35, 40 min
Shape and appearance from images and range data (Curless)
1. Registration
2. Reconstruction from point clouds
3. Reconstruction from range images
- Zippering
- Volumetric merging (see the sketch after this block)
4. Modeling appearance
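A minimal sketch of the volumetric-merging idea: each range image contributes per-voxel signed distances and confidence weights that are folded into running averages, and the surface is later extracted at the zero crossing (e.g. with marching cubes). The grid size, weights, and update rule here are placeholders, not the actual algorithm parameters.

    import numpy as np

    D = np.zeros((64, 64, 64))    # cumulative signed-distance volume
    W = np.zeros((64, 64, 64))    # cumulative weight volume

    def integrate(D, W, signed_distance, weight):
        """Fold one range image's per-voxel signed distances and weights into
        the running weighted average (schematic)."""
        D[:] = (W * D + weight * signed_distance) / np.maximum(W + weight, 1e-9)
        W[:] = W + weight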
<> 3:35 - 3:50 Break
I. 3:50 - 4:40, 50 min
Application: The Digital Michelangelo Project (Levoy)
1. Goals
- Capturing the shape and appearance of:
- Michelangelo's sculptures
- Renaissance architecture
2. Motivation
- Scholarly inquiry
- Preservation through digital archiving
- Virtual museums
- High fidelity reproductions
3. Design requirements
- Geometry: from chisel marks to building facades
- Appearance: reflectance of wood, stone, marble
4. Custom scanning hardware
5. Capturing appearance with high resolution photographs
J. 4:40 - 5:00, 20 min
Discussion: 3D cameras and the future of photography (Everyone)
1. What are the killer apps for 3D photography?
2. When are passive vs. active techniques appropriate?
3. How will consumer-grade technology influence 3D photography?
4. Will 3D photography itself become a consumer product?
<> Adjourn
This course presents the current state of the art in 3D photography and describes the principles behind a number of representative techniques. We will introduce the fundamental concepts, survey a variety of techniques, and then examine in detail a few successful approaches at the forefront of 3D photography, presented by leading researchers in the field. In particular, the course will examine optical methods, including stereo vision, photogrammetry, structured light, and laser range scanning. The course will also provide a forum for discussing the relative merits and weaknesses of current approaches.
The second course proposal, called "Practical Generation of Models from Acquired Data," organized by Ken Martin, focuses on a variety of algorithms for reconstruction from volumetric and range data, as well as methods for processing the resulting polygonal model. Whereas our course emphasizes acquisition, that course emphasizes the reconstruction steps. One of the co-organizers of the 3D photography course (Brian Curless) has agreed to be a speaker in Ken Martin's course.
Due to their complementary nature, in the event that our course and one or both of these other courses are accepted, we would recommend that our course not take place on the same day as the others. A logical choice would be to have ours precede the others.
In addition, when reviewing the course, it might be helpful to examine some notes from a course co-taught by Brian Curless and Marc Levoy at Stanford University. These notes may be found at:
http://www-graphics.stanford.edu/courses/cs348c-97-winter/index.html