3D Stereoscopic Video Display Systems Laboratory
For a technical overview of our 3D stereoscopic display system research, see the papers cited below.
We are working with ARPA High Definition Systems Program funding on three advanced visual display system (AVDS) issues: computers for the television set of the future, high-definition flat panel display manufacturing technology, and 3D-stereoscopic display of images and graphics. My lab is the locus of the 3D-stereoscopy program. The effort encompasses encoding, transmission, compression and decompression, display hardware matched to the psychophysics of binocular perception, and quantifying the utility of 3D-stereoscopy for rendering complex data, simulated reality, and intricate spatial relationships.
Our perception of depth via the sensation of binocular stereopsis is due to our brain's ability to compute range estimates from the two perspectives cast on our left and right retinas. If we had only one eye the depth sensation would be absent, and our perception of the world would be correspondingly ambiguous. Reproducing or synthesizing stereopsis requires reproducing or synthesizing the appropriate perspectives on two displays, one serving each eye. The two displays are generally realized by a single screen and some means of multiplexing, e.g., spatial, angular, temporal, polarization, chromatic, etc. We use temporal multiplexing when we need the highest possible spatial resolution, and we use a convenient and relatively inexpensive hybrid spatial/polarization method for less critical applications.
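The temporal multiplexing mentioned above can be sketched as frame-sequential display: the screen alternates left- and right-eye frames while shutter glasses open the matching eye. The following is a minimal illustration; the function name and the 120 Hz refresh rate are assumptions for the example, not a specification of our hardware.

```python
# Sketch of temporal (frame-sequential) multiplexing: the screen alternates
# left- and right-eye frames, and synchronized shutter glasses pass each
# frame to the matching eye. Rates here are illustrative assumptions.

def frame_sequence(left_frames, right_frames):
    """Interleave left and right frames; each eye sees half the field rate."""
    for l, r in zip(left_frames, right_frames):
        yield ("left", l)    # glasses: left shutter open
        yield ("right", r)   # glasses: right shutter open

display_rate_hz = 120                   # assumed panel refresh rate
per_eye_rate_hz = display_rate_hz // 2  # each eye effectively sees 60 Hz
```

The halved per-eye rate is exactly the bandwidth cost discussed further below.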
The niche we have chosen in the 3D-display world is the minimalist one: graft just enough technology onto the existing 2D infrastructure to resolve crucial depth ambiguities. We contrast this with the "total immersion," "virtual reality," and "volumetric display" approaches, which we feel impose an unacceptable overhead on the majority of users who need only simple video imagery or spreadsheet graphics displayed in low-tech, low-cost, low-eyestrain 3D.
We are researching:
A novel approach to parallax panoramagrams using time multiplexing is proposed in A Time-Multiplexed Autostereoscopic Display Based on Moving Parallax Barriers.
Our video work is driven by its application to remote visual inspection of aircraft for cracks and corrosion. The following papers best describe our research.
Such a visual inspection procedure requires geometrically correct imagery. From the beginning we have, theoretically and in practice, designed stereocameras with the proper optical and sensor systems to take stereo pictures that are easy to view. An aircraft inspector might spend hours looking at our stereoimagery without eyestrain.
These papers discuss the correct camera and screen geometries for creating 3D-stereoscopic images.
Our graphics work is guided by its application to the visualization of graphical representations of complex multi-dimensional data interactively aggregated from large databases. We initially worked with the Sage Group on the graphical portion of their project.
The current focus is sensor fusion and the corresponding 3D display of ultrasound, MRI, and CT data.
Within an HDTV infrastructure, it is possible to create high-definition stereoscopic imagery, using one high-resolution color camera and two low-resolution monochrome cameras, as described in Synthesis of a High Resolution 3D-Stereoscopic Image from a High Resolution Monoscopic Image and a Low Resolution Depth Map.
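The core of such a synthesis is depth-image-based rendering: each pixel of the high-resolution image is shifted horizontally by a disparity derived from the depth map to produce the second perspective. The sketch below illustrates that idea on a single scanline; the inverse-depth disparity model and the gap-filling rule are simplifications for illustration, not the algorithm of the cited paper.

```python
# Hedged sketch of synthesizing a second view from one image plus a depth
# map: shift each pixel horizontally by a disparity proportional to inverse
# depth (nearer pixels shift more), then fill disocclusion gaps.

def synthesize_right_view(row, depth_row, max_disparity=3):
    """Warp one scanline of the left view into an approximate right view."""
    width = len(row)
    out = [None] * width
    for x in range(width):
        d = max_disparity // max(depth_row[x], 1)  # toy inverse-depth disparity
        nx = x - d
        if 0 <= nx < width:
            out[nx] = row[x]
    # fill gaps (disocclusions) by repeating the previous filled pixel
    for x in range(width):
        if out[x] is None:
            out[x] = out[x - 1] if x > 0 else row[x]
    return out
```

A full implementation would warp 2D images and treat occlusion ordering carefully, but the per-pixel shift-by-disparity step is the same.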
Piggy-backing 3D on the existing 2D infrastructure of TV recording, broadcast, and display is only marginally possible. The infrastructure is capable of transporting some fixed number of pixels * colors * gray levels per second; dividing this bandwidth by two to make it serve two eyes exacts an unacceptable price in image quality per eye. HDTV will help, but the effect of a naive implementation will still be to drop back to something like NTSC resolution per eye.
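The arithmetic behind this argument is straightforward; the NTSC-like figures below are assumed round numbers for illustration, not measured channel capacities.

```python
# Illustrative bandwidth arithmetic for naive time-multiplexed stereo,
# using assumed round numbers for an NTSC-like channel.
pixels_per_frame = 640 * 480
frames_per_sec = 30
channel_pixels_per_sec = pixels_per_frame * frames_per_sec   # 9,216,000

# Splitting the same channel between two eyes halves each eye's share:
per_eye_pixels_per_sec = channel_pixels_per_sec // 2         # 4,608,000
```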
A more sophisticated implementation would take advantage of the high correlation between left and right perspectives to achieve compression based on left-right predictability, in the same way that video compression schemes like MPEG exploit the temporal correlation between previous and future frames. Using these concepts, we have succeeded in encoding stereo image streams in only a few percent more bandwidth than is needed to encode either image stream alone.
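The flavor of such disparity-compensated prediction can be sketched on a single scanline: find the horizontal shift of the left view that best predicts the right view, then encode only that shift plus the (mostly small) residual. This mirrors MPEG-style motion compensation applied across views; the exhaustive search and edge handling here are illustrative simplifications, not our encoder.

```python
# Minimal sketch of disparity-compensated prediction on a 1-D scanline:
# predict the right view from the left by the shift minimizing the sum of
# absolute differences, and keep only the shift and the residual.

def best_disparity(left, right, max_shift=4):
    """Return (shift, residual) for the best left-to-right prediction."""
    best = None
    for s in range(max_shift + 1):
        err = sum(abs(r - left[min(x + s, len(left) - 1)])
                  for x, r in enumerate(right))
        if best is None or err < best[1]:
            best = (s, err)
    shift = best[0]
    residual = [r - left[min(x + shift, len(left) - 1)]
                for x, r in enumerate(right)]
    return shift, residual
```

When the views are highly correlated, the residual is near zero almost everywhere, which is why the stereo stream costs only a few percent more than a single stream.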
Since these compression schemes are based on left-right predictability, we can use very similar algorithms to synthesize intermediate views, adding the illusion of "look around" to 3D-stereoscopic imagery in which only two perspectives are actually present.
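The same per-pixel disparities used for prediction can place each pixel at a fraction alpha of its full left-to-right shift, rendering a virtual camera between the two real ones. The sketch below warps one scanline; occlusion reasoning and sub-pixel filtering are omitted, so this is an illustration of the idea rather than our implementation.

```python
# Sketch of intermediate-view synthesis: a pixel that shifts by disparity d
# between the left and right views is placed at alpha * d to render a
# virtual view (alpha = 0 is the left camera, alpha = 1 the right).

def synthesize_intermediate_row(row, disparity_row, alpha):
    """Warp a left-view scanline toward a virtual view at fraction alpha."""
    width = len(row)
    out = [None] * width
    for x in range(width):
        nx = int(round(x + alpha * disparity_row[x]))
        if 0 <= nx < width:
            out[nx] = row[x]
    for x in range(width):  # fill gaps from the nearest filled neighbor
        if out[x] is None:
            out[x] = out[x - 1] if x > 0 and out[x - 1] is not None else row[x]
    return out
```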
This research is sponsored in part by Advanced Research Projects Agency Electronic Systems Technology Office High Definition Systems Program Grant MDA972-92-J-1010.
Project Related Companies & Organizations
We have been archiving stereo image sequences from various sites on the net.
Kyung-Tae Kim, email@example.com
Here's a summary (and history) of our other research areas.
contact person: Mel Siegel, firstname.lastname@example.org
Mel Siegel's homepage
Intelligent Measurement and Control Lab (a.k.a. The Sensor Lab) homepage
The Robotics Institute homepage
School of Computer Science homepage
Carnegie Mellon University homepage
maintained by: Alan Guisewite, email@example.com
Last Update 15 May 1998
NOTE: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.