Dense Surface Reconstruction from Monocular Vision and LiDAR

Download: PDF.

“Dense Surface Reconstruction from Monocular Vision and LiDAR” by Z. Li, P.C. Gogia, and M. Kaess. In Proc. IEEE Intl. Conf. on Robotics and Automation, ICRA, (Montreal, Canada), May 2019, pp. 6905-6911.

Abstract

In this work, we develop a new surface reconstruction pipeline that combines monocular camera images and LiDAR measurements from a moving sensor rig to reconstruct dense 3D mesh models of indoor scenes. For surface reconstruction, 3D LiDAR sensors and cameras are widely deployed to gather geometric information about the environment. Current state-of-the-art multi-view stereo and LiDAR-only reconstruction methods cannot reconstruct indoor environments accurately due to shortcomings of each sensor type. In our approach, LiDAR measurements are integrated into a multi-view stereo pipeline for point cloud densification and tetrahedralization. In addition, a graph cut algorithm is utilized to generate a watertight surface mesh. Because our proposed method leverages the complementary nature of these two sensors, the accuracy and completeness of the output model are improved. Experimental results on real-world data show that our method significantly outperforms both state-of-the-art camera-only and LiDAR-only reconstruction methods in accuracy and completeness.
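The graph-cut step mentioned in the abstract can be illustrated with a minimal sketch: tetrahedra of the Delaunay tetrahedralization become graph nodes, a source and sink represent the "outside" and "inside" labels, and edge capacities encode visibility/surface evidence. Solving a minimum s-t cut then partitions the tetrahedra, and the facets crossing the cut form the watertight surface. The sketch below is not the paper's implementation; the graph, node names, and capacities are hypothetical toy values, and the min cut is computed with a self-contained Edmonds-Karp max-flow.

```python
from collections import deque

def min_cut(capacity, source, sink):
    """Edmonds-Karp max-flow; returns the set of nodes on the
    source ("outside") side of the minimum s-t cut."""
    # Residual capacities as nested dicts; add zero-capacity reverse edges.
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u in list(residual):
        for v in list(residual[u]):
            residual.setdefault(v, {}).setdefault(u, 0)

    while True:
        # BFS for an augmenting path in the residual graph.
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            break  # no augmenting path left; flow is maximal
        # Trace the path back and push the bottleneck flow along it.
        path = []
        v = sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck

    # Source side of the cut = nodes still reachable in the residual graph.
    reachable = {source}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v, cap in residual[u].items():
            if cap > 0 and v not in reachable:
                reachable.add(v)
                queue.append(v)
    return reachable

# Toy chain of four tetrahedra t0..t3 between an "outside" source s and
# an "inside" sink t. Low capacities (surface evidence) sit on t0-t1 and
# t2-t3; the min cut severs the cheapest one, labeling t0 as outside.
capacity = {
    "s":  {"t0": 10},
    "t0": {"t1": 1},
    "t1": {"t2": 5},
    "t2": {"t3": 1},
    "t3": {"t": 10},
}
outside = min_cut(capacity, "s", "t")
print(sorted(outside))  # ['s', 't0']
```

In this toy example the cut crosses the t0-t1 edge, so in a real pipeline the triangular facet shared by those two tetrahedra would be emitted as part of the surface mesh; the inside/outside labeling of every tetrahedron guarantees the extracted surface is watertight.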

BibTeX entry:

@inproceedings{Li19icra,
   author = {Z. Li and P.C. Gogia and M. Kaess},
   title = {Dense Surface Reconstruction from Monocular Vision and {LiDAR}},
   booktitle = {Proc. IEEE Intl. Conf. on Robotics and Automation, ICRA},
   pages = {6905--6911},
   address = {Montreal, Canada},
   month = may,
   year = {2019}
}
Last updated: February 12, 2021