3D Photography

A SIGGRAPH 2000 Course Proposal

Brian Curless
University of Washington

Steven Seitz
Carnegie Mellon University


Contents

Presenter Information
Course Syllabus
Course History
Summary Statement
Expanded Statement
Prerequisites
Topics Beyond Prerequisites
Course Notes Description
Special Presentation Requirements
Special Notes Requirements
Note to the Reviewers


Presenter Information

Brian Curless (co-organizer)
Assistant Professor
Dept. of Computer Science & Engineering
University of Washington
Sieg Hall, Box 352350
Seattle, WA 98195-2350
Tel: (206) 685-3796
Fax: (206) 543-2969
Email: curless@cs.washington.edu
Web:
http://www.cs.washington.edu/homes/curless

Brian Curless is an assistant professor of Computer Science and Engineering at the University of Washington. He received a B.S. in Electrical Engineering from the University of Texas at Austin in 1988 and M.S. and Ph.D. degrees in Electrical Engineering from Stanford University in 1991 and 1997, respectively. After the B.S. degree, Curless developed and implemented high-speed, parallel digital signal processing algorithms at SRI International. While earning the Ph.D., he consulted for Silicon Graphics and built the prototype for SGI's Annotator product, a system for hypermedia annotation of 3D databases. Curless's recent research has focused on acquiring and building complex geometric models using structured light scanning systems. In the vision literature, he has published results on fundamentally better methods for optical triangulation, and at SIGGRAPH, he published a new method for combining range images that led to the first "3D fax" of a geometrically complex object. Curless currently sits on the Technical Advisory Board for Paraform, Inc., a company that is commercializing Stanford-developed technology for building CAD-ready models from range data and polygonal meshes. In the winter of 1999, Curless worked with Marc Levoy on the Digital Michelangelo Project in Florence, where they captured the geometry and appearance of Michelangelo's statues. His teaching experience includes both graduate and undergraduate graphics courses, including courses related to 3D photography taught at both Stanford and the University of Washington. Curless received a university-wide Outstanding Teaching Award from Stanford University in 1992.

Steven Seitz (co-organizer)
Assistant Professor
The Robotics Institute, Smith Hall
Carnegie Mellon University
5000 Forbes Ave.
Pittsburgh, PA 15213
Tel: (412) 268-6795
Fax: (412) 268-5669
Email: seitz@cs.cmu.edu
Web: http://www.cs.cmu.edu/~seitz

Steven Seitz is an Assistant Professor of Robotics and Computer Science at Carnegie Mellon University, where he conducts research in image-based rendering, graphics, and computer vision. Before joining the Robotics Institute in August 1998, he spent a year visiting the Vision Technology Group at Microsoft Research and, before that, a summer in the Advanced Technology Group at Apple Computer. He received his B.A. in computer science and mathematics from the University of California, Berkeley in 1991 and his Ph.D. in computer sciences from the University of Wisconsin, Madison in 1997. His current research focuses on the problem of acquiring and manipulating visual representations of real environments using semi- and fully-automated techniques. This effort has led to the development of "View Morphing" techniques for interpolating different images of a scene and voxel-based algorithms for computing photorealistic scene reconstructions. His work in these areas has appeared at SIGGRAPH and in international computer vision conferences and journals, and he co-organized the 1999 SIGGRAPH course on 3D Photography. Seitz is a co-winner of the 1999 David Marr Prize in Computer Vision.

Jean-Yves Bouguet
Intel Corporation - SC12-303
Microprocessor Research Labs
2200 Mission College Blvd.
Santa Clara, CA 95052
Tel: (408) 765-3891
Email: jean-yves.bouguet@intel.com
Web: http://www.vision.caltech.edu/bouguetj/

Jean-Yves Bouguet received his diplôme d'ingénieur from the École Supérieure d'Ingénieurs en Électrotechnique et Électronique (ESIEE) in 1994 and the M.S. degree in Electrical Engineering from the California Institute of Technology (Caltech) in 1994. He is now completing his Ph.D. in Electrical Engineering at Caltech in the computational vision group under the supervision of Pietro Perona. His research interests cover passive and active techniques for three-dimensional scene modeling. He has developed a simple and inexpensive method for scanning objects using shadows. This work was first presented at ICCV'98, and a patent is pending on the invention. He also collaborated with Jim Arvo, Peter Schröder, and Pietro Perona in teaching a class on 3D photography at Caltech from 1996 to 1998. Jean-Yves is currently working in collaboration with Larry Matthies at JPL on the development of passive visual techniques for three-dimensional autonomous navigation, targeted toward comet modeling and landing.

Paul Debevec
Research Scientist
Computer Science Division
University of California at Berkeley
387 Soda Hall #1776
Berkeley, CA 94720-1776
Tel: (510) 642-9940
Fax: (510) 642-5775
Email: debevec@cs.berkeley.edu
Web: http://www.cs.berkeley.edu/~debevec

Paul Debevec earned degrees in Math and Computer Engineering at the University of Michigan in 1992 and completed his Ph.D. at the University of California at Berkeley in 1996, where he is now a research scientist. Debevec first studied 3D photography in 1989 during a computer vision course taught by Ramesh Jain. In 1991, Debevec used 3D photography to model a 1980 Chevette automobile from a small set of photographs. At Berkeley, Debevec collaborated on several creative projects with Interval Research Corporation that employed a variety of 3D photography techniques, including Michael Naimark's "Immersion" project shown at SIGGRAPH 95 and the art installation "Rouen Revisited" at the SIGGRAPH 96 art show. Debevec's Ph.D. thesis with Jitendra Malik and C. J. Taylor presented an interactive method for modeling architectural scenes from sparse sets of photographs and for rendering these scenes realistically. Debevec has co-authored papers in computer vision and computer graphics, spoken at a variety of venues on topics relating to 3D photography, and co-organized the SIGGRAPH 98 course "Image-Based Modeling and Rendering" with Steven Gortler. In 1997, Debevec led the effort to produce an image-based model of the UC Berkeley campus for "The Campanile Movie", a short film shown at the SIGGRAPH 97 Electronic Theater. The following year, he presented the film "Rendering with Natural Light" to demonstrate novel image-based lighting techniques. With interests in art and cinema, Debevec enjoys investigating techniques that are useful for creative applications.

Marc Levoy
Associate Professor
Stanford University
Gates Computer Science Building
Room 366, Wing 3B
Stanford University
Stanford, CA 94305
Tel: (650) 725-4089
Fax: (650) 723-0033
Email: levoy@cs.stanford.edu
Web: http://graphics.stanford.edu/~levoy

Marc Levoy is an associate professor of Computer Science and Electrical Engineering at Stanford University. He received a Bachelor of Architecture in 1976 and an M.S. in 1978, both from Cornell University, and a Ph.D. in Computer Science in 1989 from the University of North Carolina at Chapel Hill. Levoy's early research centered on computer-assisted cartoon animation, leading to the development of a computer animation system for Hanna-Barbera Productions. His recent publications are in the areas of volume visualization, rendering algorithms, computer vision, geometric modeling, and user interfaces for imaging and visualization. His current research interests include digitizing the shape and appearance of physical objects using multiple sensing technologies; the creation, representation, and rendering of complex geometric models; image-based modeling and rendering; and applications of computer graphics in art history, preservation, restoration, and archeology. Levoy received the NSF Presidential Young Investigator Award in 1991 and the SIGGRAPH Computer Graphics Achievement Award in 1996 for his work in volume rendering.

Shree K. Nayar
Professor
Department of Computer Science
Columbia University
500 West 120th Street
New York, NY 10027
Tel: (212) 939-7092
Fax: (212) 939-7172
Email: nayar@cs.columbia.edu
Web: http://www.cs.columbia.edu/~nayar/

Shree K. Nayar is a Professor in the Department of Computer Science at Columbia University. He received his Ph.D. degree in Electrical and Computer Engineering from the Robotics Institute at Carnegie Mellon University in 1990. His primary research interests are in computational vision and robotics, with emphasis on physical models for early visual processing, sensors and algorithms for shape recovery, learning and recognition of visual patterns, and vision for graphics. Dr. Nayar has authored and coauthored papers that have received the David Marr Prize at the 1995 International Conference on Computer Vision (ICCV'95) held in Boston, the Siemens Outstanding Paper Award at the 1994 IEEE Conference on Computer Vision and Pattern Recognition (CVPR'94) held in Seattle, the 1994 Annual Pattern Recognition Award from the Pattern Recognition Society, the Best Industry Related Paper Award at the 1994 International Conference on Pattern Recognition (ICPR'94) held in Jerusalem, and the David Marr Prize at the 1990 International Conference on Computer Vision (ICCV'90) held in Osaka. He holds several U.S. and international patents for inventions related to computer vision and robotics. Dr. Nayar was the recipient of the David and Lucile Packard Fellowship for Science and Engineering in 1992 and the National Young Investigator Award from the National Science Foundation in 1993.

Course Syllabus


A. 8:30 - 8:50, 20 min  

  Introduction (Curless)

    1. Overview of area and the course
    2. Speaker introductions
    3. Applications to computer graphics


B. 8:50 - 9:35, 45 min

  Sensing for vision and graphics (Nayar)
    
    1. The dimensions of visual sensing
    2. Catadioptric vision
    3. Panoramic and omnidirectional cameras
    4. Spherical mosaics
    5. Single camera stereo
    6. Radiometric self calibration
    7. High dynamic range imaging
    8. Vision and the atmosphere
    9. Structure from bad weather


C. 9:35 - 10:15, 40 min

  Overview of passive vision techniques (Seitz)

    1. Cues for 3D inference (parallax, shading, focus, texture)
    2. Camera Calibration
    3. Single view techniques
    4. Multiple view techniques
       - Stereo
       - Structure from motion
       - Photometric stereo
    5. Strengths and Limitations

<> 10:15 - 10:30 Break


D. 10:30 - 11:20, 50 min

  Façade: modeling architectural scenes (Debevec)

    1. Constrained structure recovery
       - Architectural primitives
    2. Photogrammetry
       - Recovering camera parameters
       - Importance of user-interaction 
    3. Model-based stereo
    4. Connections to image-based rendering
       - Impact of geometric accuracy on rendering quality
       - Local vs. global 3D models


E. 11:20 - 12:00, 40 min  

  Voxels from images (Seitz)

    1. Voxel-based scene representation
    2. Volume intersection
       - Shape from silhouettes
    3. Voxel coloring
       - Modeling radiance
       - Plane-sweep visibility
    4. Space carving
       - General visibility modeling
       - Ambiguities in scene reconstruction
    5. Related Techniques


<> 12:00 - 1:30 Lunch 


F. 1:30 - 2:15, 45 min  

  Overview of active vision techniques (Curless)

    1. Imaging radar
       - Time of flight
       - Amplitude modulation
    2. Optical triangulation
       - Scanning with points and stripes
       - Spacetime analysis
    3. Interferometry
       - Moiré
    4. Structured light applied to passive vision
       - Stereo
       - Depth from defocus
    5. Reflectance capture
       - From shape-directed lighting
       - Using additional lighting


G. 2:15 - 2:55, 40 min  

  Desktop 3D photography (Bouguet)

    1. Traditional scanning is expensive, but...
         desk lamp + pencil = structured light
    2. The shadow scanning technique
       - Indoor: on the desktop
       - Outdoor: the sun as structured light
    3. Calibration issues
    4. Temporal analysis for improved accuracy
    5. Error Analysis


H. 2:55 - 3:35, 40 min  

  Shape and appearance from images and range data (Curless)

    1. Registration
    2. Reconstruction from point clouds
    3. Reconstruction from range images
       - Zippering
       - Volumetric merging
    4. Modeling appearance


<> 3:35 - 3:50 Break


I. 3:50 - 4:40, 50 min  

  Application: The Digital Michelangelo Project (Levoy)

    1. Goals
       - Capturing the shape and appearance of:
          - Michelangelo's sculptures
          - Renaissance architecture
    2. Motivation
       - Scholarly inquiry
       - Preservation through digital archiving
       - Virtual museums
       - High fidelity reproductions
    3. Design requirements
       - Geometry: from chisel marks to building facades 
       - Appearance: reflectance of wood, stone, marble
    4. Custom scanning hardware 
    5. Capturing appearance with high resolution photographs 


J. 4:40 - 5:00, 20 min  

  Discussion: 3D cameras and the future of photography (Everyone)

    1. What are the killer apps for 3D photography?
    2. When are passive vs. active techniques appropriate?
    3. How will consumer-grade technology influence 3D photography?
    4. Will 3D photography itself become a consumer product?

<> Adjourn

Course History

A previous version of this course was taught at SIGGRAPH 1999, also organized by Curless and Seitz. This year's version adds a sixth speaker (Shree Nayar) to the five who presented in 1999.

Summary Statement

This course provides an introduction to 3D photography: the process of using cameras and light to capture the shape and appearance of real objects. Methods include both passive and active vision techniques ranging from stereo, structure from motion, and photogrammetry to imaging radar, optical triangulation, and interferometry. The course introduces these fundamental methods, provides in-depth analysis of several emerging techniques, and concludes with a field study: capturing 3D photographs of Michelangelo's statues.

Expanded Statement

3D photography is an emerging technology for capturing richly textured, 3D models of real objects and scenes. While optical cameras measure the visible light radiated from the scene, 3D photography systems measure scene geometry and color. The combination of these two technologies has the potential to change the face of computer graphics by providing an effective means of constructing graphical scenes of unparalleled detail and realism.

This course presents the current state of the art in 3D photography and describes the principles behind a number of current techniques. We will introduce the fundamental concepts, survey a variety of techniques, and then examine in detail a few successful approaches at the forefront of 3D photography, presented by leading researchers in the field. In particular, the course will examine optical methods, including stereo vision, photogrammetry, structured light, and laser range scanning, and will provide a forum for discussing the relative merits and weaknesses of current approaches.

Prerequisites

Participants will benefit from an understanding of basic techniques for representing and rendering surfaces and volumes. In particular, the course will assume familiarity with triangular meshes, voxels, and implicit functions (isosurfaces of volumes). Rendering concepts will include light interaction with surfaces (e.g., diffuse and specular reflection) and the mathematics of perspective projection, illustrated below. An understanding of basic image processing will also be important. Experience with still photography will be helpful.
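
As a point of reference, the perspective projection mathematics assumed here amounts to the pinhole camera model (a sketch in our own notation):

    % Pinhole projection: a scene point (X, Y, Z) in camera coordinates
    % maps to image coordinates (x, y) for a camera with focal length f.
    \[
      x = f \, \frac{X}{Z} , \qquad y = f \, \frac{Y}{Z}
    \]

Nearer points (smaller Z) project with greater magnification; this depth dependence is the parallax cue that several of the techniques below exploit.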

Topics Beyond Prerequisites

The course will cover a variety of methods for recovering shape from images. Introductory material will describe the fundamentals of cameras, from lenses to CCDs, and ways of calibrating them. A number of standard and emerging passive vision methods will be presented, including stereo (sketched below), structure from motion, shape from focus/defocus, shape from shading, interactive photogrammetry, and voxel coloring. Active vision methods will include imaging radar, optical triangulation, moiré, active stereo, active depth from defocus, and desktop shadow striping. An overview of reconstructing shape and appearance from range images will be followed by a presentation of the Digital Michelangelo Project.
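
To give a concrete flavor of the geometry involved, the following minimal sketch (illustrative only; the function and parameter names are our own) shows the depth computation at the heart of rectified two-view stereo, where depth is inversely proportional to disparity:

    # Rectified stereo: two cameras side by side with parallel optical axes.
    # A point at depth Z shifts by disparity d = f * B / Z pixels between
    # the two images, so depth is recovered as Z = f * B / d.
    import numpy as np

    def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
        """Convert a disparity map (pixels) to a depth map (meters)."""
        d = np.asarray(disparity_px, dtype=float)
        depth = np.full_like(d, np.inf)      # zero disparity: infinitely far
        visible = d > 0
        depth[visible] = focal_length_px * baseline_m / d[visible]
        return depth

    # Example: f = 500 pixels and a 10 cm baseline; a 100-pixel disparity
    # corresponds to a depth of 0.5 meters.
    print(depth_from_disparity([[100.0]], 500.0, 0.10))   # [[0.5]]

The hard parts in practice are computing the disparity map itself (the correspondence problem) and calibrating the focal length and baseline, which is precisely the material the course covers.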

Course Notes Description

Course notes will consist of copies of the speakers' slides, images and video clips of some of the demonstrations, a bibliography of related work, copies of related papers, and links to online reference materials relating to 3D photography.

Special Presentation Requirements

None.

Special Notes Requirements

None.

Note to the Reviewers

At the time of submission, we are aware of one other course proposal that we regard as complementary to this course, on the topic of image-based modeling and rendering (IBMR). The IBMR course discusses some passive vision techniques, but is directed more toward using images for re-rendering than toward precise geometric reconstruction. One of the organizers of that course is Paul Debevec, who has also agreed to be a speaker for ours.

Given their complementary nature, in the event that both our course and the IBMR course are accepted, we recommend that the two courses not take place on the same day. A logical choice would be to have ours precede the IBMR course.

In addition, when reviewing the course, it might be helpful to examine last year's course notes. These notes may be found at: http://www.cs.cmu.edu/~seitz/course/3DPhoto-sigg99.html