In recent times, computer vision has made great leaps towards 2D understanding of sparse visual snapshots of the world. This is insufficient for robots that need to exist and act in the 3D world around them based on a continuous stream of multi-modal inputs. In this talk, I will present some of my efforts in bridging this gap between computer vision and robotics. I will show how thinking about computer vision and robotics together brings out limitations of current computer vision tasks and techniques, and motivates the joint study of perception and action for robotic tasks. I will showcase these aspects via three examples: visual navigation, 3D scene understanding, and representation learning for varied modalities. I will conclude by pointing out future research directions at the intersection of computer vision and robotics, thus showing how the two fields are ready to get back together.
Saurabh Gupta is a Ph.D. student at UC Berkeley, where he is advised by Jitendra Malik. His research interests include computer vision, robotics, and machine learning. His Ph.D. work focuses on 3D scene understanding and visual navigation. His work is supported by a Berkeley Fellowship and a Google Fellowship in Computer Vision.