Texture-Illumination Separation for Single-shot Structured Light Reconstruction

Flags Pattern

Active illumination-based methods face a trade-off between acquisition time and the resolution of the estimated 3D shapes. Multi-shot approaches can generate dense reconstructions but require stationary scenes. In contrast, single-shot methods are applicable to dynamic objects but can only estimate sparse reconstructions and are sensitive to surface texture. In this work, we develop a single-shot approach that produces dense reconstructions of highly textured objects. The key to our approach is an image decomposition scheme that recovers the illumination and texture images from their mixed appearance. Despite the complex appearance of the illuminated textured regions, our method accurately computes per-pixel warps from the illumination pattern and the texture template to the observed image. The texture template is obtained by interleaving the projection sequence with an all-white pattern. The estimated warping functions remain reliable even with infrequent interleaved projection. Thus, we obtain detailed shape reconstruction and dense motion tracking of the textured surfaces. We validate the approach on synthetic and real data containing subtle non-rigid surface deformations.

Publications


"Separating Texture and Illumination for Single-Shot Structured Light Reconstruction"
M. Vo, S. G. Narasimhan, and Y. Sheikh,
The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops,
June 2014.
[PDF] [PPT] [PPT with video]

Technical summary

Because the surface is highly textured, spatial decoding of the high-frequency light pattern cannot be applied directly to the observed image.

The projector pattern serves as the illumination template. The texture template is obtained by interleaving the projection sequence with an all-white pattern.

Affine warping functions are applied to both the texture and illumination templates to synthesize the observed image. Once the mapping from the light pattern in the observed image to its template is recovered, the 3D shape is computed by triangulation.
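As a rough illustration of this synthesis model, the sketch below predicts an observed patch as the product of an affinely warped texture template and an affinely warped illumination template, and triangulates a matched camera-projector correspondence with the standard linear (DLT) method. All names here (texture_tpl, illum_tpl, the 2x3 warp matrices, the projection matrices) are illustrative assumptions, not the authors' implementation.

import numpy as np
from scipy.ndimage import map_coordinates

def affine_warp_patch(template, affine, ys, xs):
    # Sample `template` at affinely warped coordinates of the patch grid (ys, xs).
    # `affine` is a 2x3 matrix [[a, b, tx], [c, d, ty]] mapping observed-image
    # coordinates (x, y) into template coordinates.
    a, b, tx = affine[0]
    c, d, ty = affine[1]
    u = a * xs + b * ys + tx   # template x-coordinates
    v = c * xs + d * ys + ty   # template y-coordinates
    return map_coordinates(template, [v.ravel(), u.ravel()],
                           order=1, mode='nearest').reshape(xs.shape)

def synthesize_patch(texture_tpl, illum_tpl, warp_tex, warp_illum, ys, xs):
    # Predicted observed patch = warped texture template * warped illumination pattern.
    tex = affine_warp_patch(texture_tpl, warp_tex, ys, xs)
    ill = affine_warp_patch(illum_tpl, warp_illum, ys, xs)
    return tex * ill

def triangulate(P_cam, P_proj, x_cam, x_proj):
    # Linear (DLT) triangulation of one 3D point from a camera pixel and its
    # matched location in the projector pattern, given 3x4 projection matrices.
    A = np.vstack([
        x_cam[0] * P_cam[2] - P_cam[0],
        x_cam[1] * P_cam[2] - P_cam[1],
        x_proj[0] * P_proj[2] - P_proj[0],
        x_proj[1] * P_proj[2] - P_proj[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]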

Results

(Video Result Playlist)

The texture-illumination greedy growing process. The propagation uses the estimated affine warping coefficients of previously solved points as the initial guess.
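A minimal sketch of such a best-first growing loop is shown below, assuming a per-point photometric residual `patch_residual(params, y, x)` (e.g., synthesized patch minus observed patch) and a scalar `score_fn` for ordering the queue; the use of SciPy's least_squares as the local refiner is an assumption for illustration, not necessarily the paper's optimizer.

import heapq
import numpy as np
from scipy.optimize import least_squares

def grow(seeds, shape, patch_residual, score_fn):
    # Greedy best-first propagation over the image grid.
    # seeds: dict {(y, x): params} of already-solved points (affine warp coefficients).
    # shape: (H, W) of the observed image.
    H, W = shape
    solved = dict(seeds)
    heap = [(score_fn(p, y, x), (y, x), p) for (y, x), p in seeds.items()]
    heapq.heapify(heap)

    while heap:
        _, (y, x), params = heapq.heappop(heap)
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if not (0 <= ny < H and 0 <= nx < W) or (ny, nx) in solved:
                continue
            # Use the neighbor's estimated affine coefficients as the
            # initial guess for the local refinement at (ny, nx).
            res = least_squares(patch_residual, params, args=(ny, nx))
            solved[(ny, nx)] = res.x
            heapq.heappush(heap, (score_fn(res.x, ny, nx), (ny, nx), res.x))
    return solved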

T-shirt with flag-pattern scene: input images, estimated texture and illumination images, computed texture and illumination flow, and recovered 3D shape.

Acknowledgements


This research was supported in part by ONR Grant N00014-11-1-0295, NSF Grant IIS-1317749, and NSF Grant No. 1353120.