Near-Light Photometric Stereo using Circularly Placed Point Light Sources

Most photometric stereo approaches assume distant (directional) lighting and orthographic imaging. However, when the source is divergent and near the object, and the camera is perspective, the image intensity of a Lambertian object is a non-linear function of both the unknown surface normals and the unknown distances from the source to the surface points. The resulting non-linear optimization is non-convex and highly sensitive to the initial guess. In this paper, we propose a two-stage near-light photometric stereo method using circularly placed point light sources (commonly seen in recent consumer imaging devices such as the Nest Cam and Amazon Cloud Cam). We represent the scene as a 3D mesh and directly optimize the positions of its vertices, which simplifies the relationship between surface normals and depths in the image formation model. In the first stage, we optimize the vertex positions using the differential images induced by small changes in light source position. This procedure yields a strong initial guess for the second stage, which refines the estimates using the raw captured images. We also propose an accurate calibration approach to estimate the positions of the sources. On both simulated and real Lambertian scenes with complex shapes, our approach outperforms the state-of-the-art near-field photometric stereo method.
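The non-linearity described above can be made concrete with a minimal sketch of the near-light Lambertian image formation model. This is a generic point-source shading computation, not the paper's actual implementation: both the light direction and the inverse-square falloff depend on the unknown source-to-point distance, so intensity is non-linear in the surface geometry.

```python
import numpy as np

def near_light_intensity(p, n, albedo, s):
    """Lambertian intensity at surface point p (3,) with unit normal n,
    lit by an isotropic point source at s of unit radiant intensity.
    Unlike the distant-light model, both the light direction l/d and
    the 1/d^2 falloff depend on the unknown distance d = ||s - p||,
    which makes the intensity non-linear in the surface geometry."""
    l = s - p                         # vector from surface point to source
    d = np.linalg.norm(l)             # source-to-point distance
    shading = max(0.0, n @ (l / d))   # Lambertian cosine term
    return albedo * shading / d**2    # inverse-square falloff

# Illustrative numbers: a 30 mm off-axis source (the ring radius) and a
# point 600 mm in front of the ring plane, facing back toward it.
p = np.array([0.0, 0.0, 600.0])
n = np.array([0.0, 0.0, -1.0])
s = np.array([30.0, 0.0, 0.0])
I = near_light_intensity(p, n, 1.0, s)
```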


" Near-Light Photometric Stereo using Circularly Placed Point Light Sources "
Chao Liu, Srinivasa G. Narasimhan and Artur W. Dubrawski
IEEE International Conference on Computational Photography (ICCP) 2018
[PDF] [supp] [slides (pdf)] [code]


Our imaging setup with a 30 mm radius ring of 24 LEDs controlled using an Arduino board. The object is placed at a distance up to 20 times the LED ring radius away from the camera and light sources.

The geometry of differential change of light source positions. The light sources are densely mounted on a planar circle centered around the camera. Because the distance between adjacent LEDs is small, we can differentiate the image intensity with respect to the light source index to model the differential images.
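Since the LEDs are densely and uniformly spaced on a closed ring, the derivative of intensity with respect to source index can be approximated by finite differences between images taken under adjacent LEDs. The sketch below is an assumed discretization (central differences with circular wrap-around), not the paper's exact formulation.

```python
import numpy as np

def differential_images(images):
    """Approximate the derivative of image intensity w.r.t. the light
    source index for a stack of images (K, H, W), one per LED, ordered
    by angular position on the ring. Uses central finite differences
    with circular wrap-around, since the ring of LEDs is closed and
    adjacent LEDs are closely spaced."""
    # D[k] = (images[k+1] - images[k-1]) / 2, indices taken modulo K
    return (np.roll(images, -1, axis=0) - np.roll(images, 1, axis=0)) / 2.0

# Example: a toy stack of 24 constant images whose brightness grows
# linearly with the LED index.
stack = np.arange(24, dtype=float)[:, None, None] * np.ones((24, 4, 4))
D = differential_images(stack)
```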

We calibrate the camera and the point light source positions using a planar glossy display (a MacBook screen). We first capture images with the display turned on and the light sources turned off, from which we estimate the camera intrinsic matrix and the display plane parameters. Then, we capture images with the display turned off and the LEDs turned on to observe the reflections of the light sources in the glossy screen.
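The key geometric step in this style of mirror-based calibration is that the highlight seen in the glossy screen is the virtual (mirror) image of the real LED, so once the display plane is known, the LED position is recovered by reflecting the virtual source across that plane. A minimal sketch of that reflection, assuming the plane is given in Hessian normal form {p : n·p + d = 0}:

```python
import numpy as np

def reflect_point_across_plane(x, n, d):
    """Mirror a 3D point x across the plane {p : n.p + d = 0}.
    In mirror-based light calibration, the virtual source observed in
    the glossy display is the mirror image of the real LED, so
    reflecting it across the estimated display plane recovers the
    LED position (illustrative sketch, not the paper's full pipeline)."""
    n = n / np.linalg.norm(n)          # ensure a unit normal
    return x - 2.0 * (n @ x + d) * n   # subtract twice the signed distance

# Example: a virtual source 5 units behind the plane z = 0 maps to the
# real source 5 units in front of it.
virtual = np.array([0.0, 0.0, 5.0])
real = reflect_point_across_plane(virtual, np.array([0.0, 0.0, 1.0]), 0.0)
```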

Due to the small ratio between the LED ring radius and the object-to-camera distance, the differences between images are very small, even for the pair of LEDs with the largest baseline. Despite these small differences, our method still performs well.


We show the input images captured with different LEDs turned on and the corresponding 3D reconstructions.


This work has been supported in part by the NSF (Expeditions 1730147 and CNS 1446601), ONR (N00014-14-1-0595) and DARPA (FA8750-17-2-0130).