Programmable Light Curtains

Sense what you want. When you want. Where you want.


Overview

Depth sensors like LiDARs and Kinect use a fixed depth acquisition strategy that is independent of the scene of interest. Due to the low spatial and temporal resolution of these sensors, this strategy can undersample parts of the scene that are important (small or fast-moving objects), or oversample areas that are not informative for the task at hand (a fixed planar wall).

We've developed an approach and system to dynamically and adaptively sample the depths of a scene. The approach directly detects the presence or absence of objects along specified 3D lines. These 3D lines can be sampled sparsely, non-uniformly, or densely only at specified regions. The depth sampling can be varied in real time, enabling quick object discovery or detailed exploration of areas of interest. The controllable nature of light curtains presents a challenge: the user must specify the regions of the scene where light curtains should be placed.

We have designed novel algorithms, using a combination of machine learning, computer vision, planning, and dynamic programming, that program light curtains for accurate depth estimation, semantic object detection, and obstacle detection and avoidance. Please see our publications below for more details.

[Figure: depth sampling with a LiDAR vs. a light curtain]

How it works

Triangulation Light Curtain Principle

A light curtain consists of an illumination plane and an imaging plane. In a traditional safety light curtain, such as those used in elevators, these are precisely aligned facing each other to detect anything that breaks the light plane between them. These traditional light curtains are very reliable, but only detect objects in a plane, and are difficult to reconfigure.

A programmable light curtain device places the illumination and imaging planes side by side so that they intersect in a line. If there is nothing along this line, the camera sees nothing; if there is an object along this line, light is reflected toward the camera and the object is detected. By changing the angles between the imaging and illumination planes, this line is swept through a volume to create a light curtain. The sequence of plane angles is determined by triangulation from the specified curtain profile and can be changed in real time to generate many light curtains per second.
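
As a concrete sketch of this triangulation (not our device's actual API; the frame convention and names are illustrative assumptions), the snippet below computes the camera-ray and projector angles whose planes intersect at each point of a user-specified curtain profile:

    import numpy as np

    def plan_curtain(profile_xz, baseline=0.20):
        """Triangulate camera and galvo angles for a desired curtain profile.

        profile_xz : (N, 2) array of (x, z) points on the curtain, in a frame
                     with the camera at the origin and z pointing forward.
        baseline   : camera-to-projector separation along x, in meters.
        Returns angles (radians, measured from the z-axis) that would be
        mapped to camera pixel columns and galvo commands.
        """
        x, z = profile_xz[:, 0], profile_xz[:, 1]
        theta_cam = np.arctan2(x, z)              # camera ray through the point
        theta_proj = np.arctan2(x - baseline, z)  # projector ray through the same point
        return theta_cam, theta_proj

    # Example: a flat curtain 5 m ahead, 2 m wide, sampled at 9 points.
    xs = np.linspace(-1.0, 1.0, 9)
    profile = np.stack([xs, np.full_like(xs, 5.0)], axis=1)
    theta_cam, theta_proj = plan_curtain(profile)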

Since the illumination and imaging are synchronized and focused on a single line, the exposure can be very short (~100 µs). This short exposure integrates very little ambient light while still collecting all of the light from the illumination system.
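
For a rough sense of the ambient rejection this buys (illustrative arithmetic; the full-frame comparison point is our assumption, not a measured figure):

    # Ambient light integrates over the exposure time; the laser return does
    # not suffer, since the sheet lights each imaged line during its exposure.
    line_exposure_s = 100e-6        # ~100 us per-line exposure, from the text
    frame_exposure_s = 1.0 / 60.0   # a conventional full-frame exposure at 60 Hz
    print(f"ambient reduction: ~{frame_exposure_s / line_exposure_s:.0f}x")  # ~167x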

Optical Schematic

The illumination system uses a custom-built light sheet projector comprising a laser, a collimation lens, a lens that fans the laser into a line, and a galvo mirror that steers the laser line. The imaging side contains a 2D rolling shutter camera. The projector emits a plane of light, while the camera captures along the plane defined by its active shutter line. The motion of the galvo-steered light sheet is synchronized with the progression of the camera's rolling shutter so that the imaging and illumination planes intersect along the curtain profile. This scanning happens at the full frame rate of the camera, producing 60 light curtains per second.
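
A minimal sketch of this synchronization (the per-line timing value is our estimate from the 60 Hz frame rate and 640 lines, not a device spec):

    def galvo_schedule(theta_proj, line_time_us=26.0, t0_us=0.0):
        """Sync galvo commands to the camera's rolling shutter.

        theta_proj   : projector angle for each rolling-shutter line, as
                       triangulated from the curtain profile (see the sketch
                       in the previous section).
        line_time_us : time between successive shutter lines; ~26 us is our
                       guess for 640 lines at 60 Hz, not a measured value.
        Returns (timestamp_us, angle) pairs: when line i exposes, the galvo
        must aim the light sheet so both planes meet on the curtain.
        """
        return [(t0_us + i * line_time_us, angle)
                for i, angle in enumerate(theta_proj)]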


Prototype

Our light curtain prototype consists of:

  • Light Sheet Projector illumination system using a 1D light source and a galvo mirror
  • Imaging system using a 2D rolling shutter camera
  • 2D helper camera for visualization only

Performance Specs:

  • Resolution: 512 × 640
  • FOV: 40° (h) × 45° (v)
  • Baseline: 20 cm between light sheet projector and camera
  • Outdoor Range in sunlight (white scene): 20 meters
  • Indoor Range (white scene): 50+ meters
  • Frame Rate: 60 Hz
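
From the baseline and the per-pixel angular resolution implied by these specs, one can estimate how depth uncertainty grows with range using the standard first-order triangulation relation δz ≈ z²·δθ/b. This is a back-of-the-envelope sketch; the one-pixel angular error is our assumption, not a measured figure:

    import math

    fov_v_deg, n_lines = 45.0, 640              # vertical FOV and lines, from the specs
    baseline_m = 0.20                           # projector-camera baseline, from the specs
    dtheta = math.radians(fov_v_deg) / n_lines  # ~1.2 mrad per pixel

    for z in (5.0, 10.0, 20.0):
        dz = z ** 2 * dtheta / baseline_m       # first-order triangulation uncertainty
        print(f"z = {z:4.1f} m  ->  ~{dz * 100:.0f} cm")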

Our prototype


Publications


RSS 2021
Active Safety Envelopes using Light Curtains with Probabilistic Guarantees
Siddharth Ancha, Gaurav Pathak, Srinivasa G. Narasimhan, David Held


To safely navigate unknown environments, robots must accurately perceive dynamic obstacles. Instead of directly measuring the scene depth with a LiDAR sensor, we explore the use of a much cheaper and higher resolution sensor: programmable light curtains. Light curtains are controllable depth sensors that sense only along a surface that a user selects. We use light curtains to estimate the safety envelope of a scene: a hypothetical surface that separates the robot from all obstacles. We show that generating light curtains that sense random locations (from a particular distribution) can quickly discover the safety envelope for scenes with unknown objects. Importantly, we produce theoretical safety guarantees on the probability of detecting an obstacle using random curtains. We combine random curtains with a machine learning based model that forecasts and tracks the motion of the safety envelope efficiently. Our method accurately estimates safety envelopes while providing probabilistic safety guarantees that can be used to certify the efficacy of a robot perception system to detect and avoid dynamic obstacles. We evaluate our approach in a simulated urban driving environment and a real-world environment with moving pedestrians using a light curtain device and show that we can estimate safety envelopes efficiently and effectively.
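A toy version of the random-curtain idea, for intuition only: the paper encodes the true galvo constraints in a graph and computes detection probabilities exactly by dynamic programming, whereas this sketch just limits how far the curtain may jump between adjacent rays.

    import numpy as np

    def sample_random_curtain(n_rays=640, depths=np.linspace(1.0, 20.0, 64),
                              max_step=2, rng=None):
        """Sample one random curtain: a depth per camera ray.

        Successive rays may differ by at most `max_step` depth bins, a crude
        stand-in for the galvo's velocity limits.
        """
        rng = rng or np.random.default_rng()
        idx = int(rng.integers(len(depths)))
        curtain = []
        for _ in range(n_rays):
            lo, hi = max(0, idx - max_step), min(len(depths) - 1, idx + max_step)
            idx = int(rng.integers(lo, hi + 1))
            curtain.append(depths[idx])
        return np.array(curtain)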

@inproceedings{Ancha-RSS-21, 
    author    = {Siddharth Ancha AND Gaurav Pathak AND Srinivasa Narasimhan AND David Held}, 
    title     = {Active Safety Envelopes using Light Curtains with Probabilistic Guarantees}, 
    booktitle = {Proceedings of Robotics: Science and Systems}, 
    year      = {2021}, 
    address   = {Virtual}, 
    month     = {July}, 
    doi       = {10.15607/rss.2021.xvii.045} 
}
CVPR 2021
Exploiting & Refining Depth Distributions with Triangulation Light Curtains
Yaadhav Raaj, Siddharth Ancha, Robert Tamburo, David Held, Srinivasa Narasimhan


Active sensing through the use of adaptive depth sensors is a nascent field, with potential in areas such as advanced driver-assistance systems (ADAS). They do however require dynamically driving a laser / light-source to a specific location to capture information, with one such class of sensors being programmable light curtains. In this work, we introduce a novel approach that exploits prior depth distributions from RGB cameras to drive a light curtain's laser line to regions of uncertainty to get new measurements. These measurements are utilized such that depth uncertainty is reduced and errors get corrected recursively. We show real-world experiments that validate our approach in outdoor and driving settings, and demonstrate qualitative and quantitative improvements in depth RMSE when RGB cameras are used in tandem with a light curtain.
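A simplified illustration of driving curtains to uncertain regions (not the paper's planner: placing the curtain at each ray's belief median is our stand-in rule, and real planning must also respect device constraints):

    import numpy as np

    def next_curtain(depth_pmf, depth_bins):
        """Choose where to place the next curtain from a per-ray depth belief.

        depth_pmf  : (n_rays, n_bins) probability over depth for each camera
                     ray, e.g. from an RGB depth network (assumed input).
        depth_bins : (n_bins,) depth value of each bin, in meters.
        Places the curtain at each ray's belief median, where a hit or miss
        rules out the most probability mass before a recursive update.
        """
        cdf = np.cumsum(depth_pmf, axis=1)
        median_idx = np.argmax(cdf >= 0.5, axis=1)
        return depth_bins[median_idx]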

@inproceedings{cvpr2021raajexploiting,
    author    = {Yaadhav Raaj AND Siddharth Ancha AND Robert Tamburo AND David Held AND Srinivasa Narasimhan},
    title     = {Exploiting and Refining Depth Distributions with Triangulation Light Curtains},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    year      = {2021}
}
ECCV 2020
Spotlight
Active Perception using Light Curtains for Autonomous Driving
Siddharth Ancha, Yaadhav Raaj, Peiyun Hu, Srinivasa Narasimhan, David Held


Most real-world 3D sensors such as LiDARs perform fixed scans of the entire environment, while being decoupled from the recognition system that processes the sensor data. In this work, we propose a method for 3D object recognition using light curtains, a resource-efficient controllable sensor that measures depth at user-specified locations in the environment. Crucially, we propose using prediction uncertainty of a deep learning based 3D point cloud detector to guide active perception. Given a neural network's uncertainty, we derive an optimization objective to place light curtains using the principle of maximizing information gain. Then, we develop a novel and efficient optimization algorithm to maximize this objective by encoding the physical constraints of the device into a constraint graph and optimizing with dynamic programming. We show how a 3D detector can be trained to detect objects in a scene by sequentially placing uncertainty-guided light curtains to successively improve detection accuracy.
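A bare-bones version of such a dynamic program: nodes are (camera ray, depth bin) pairs, the gain values stand in for the paper's information-gain objective, and `max_step` stands in for the device constraint graph.

    import numpy as np

    def plan_curtain_dp(gain, max_step=2):
        """Find the feasible curtain that maximizes total sensing gain by DP.

        gain     : (n_rays, n_bins) value of imaging each (ray, depth bin).
        max_step : how many bins the curtain may jump between adjacent rays.
        Returns the optimal depth-bin index for each camera ray.
        """
        n_rays, n_bins = gain.shape
        best = gain[0].copy()                         # best path value ending at each bin
        back = np.zeros((n_rays, n_bins), dtype=int)  # backpointers for traceback
        for r in range(1, n_rays):
            new_best = np.empty(n_bins)
            for b in range(n_bins):
                lo, hi = max(0, b - max_step), min(n_bins, b + max_step + 1)
                prev = lo + int(np.argmax(best[lo:hi]))
                back[r, b] = prev
                new_best[b] = gain[r, b] + best[prev]
            best = new_best
        path = [int(np.argmax(best))]                 # trace the best path backwards
        for r in range(n_rays - 1, 0, -1):
            path.append(int(back[r, path[-1]]))
        return np.array(path[::-1])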

@inproceedings{ancha2020eccv,
    author    = {Ancha, Siddharth AND Raaj, Yaadhav AND Hu, Peiyun AND Narasimhan, Srinivasa G. AND Held, David},
    editor    = {Vedaldi, Andrea AND Bischof, Horst AND Brox, Thomas AND Frahm, Jan-Michael},
    title     = {Active Perception Using Light Curtains for Autonomous Driving},
    booktitle = {Computer Vision -- ECCV 2020},
    year      = {2020},
    publisher = {Springer International Publishing},
    address   = {Cham},
    pages     = {751--766},
    isbn      = {978-3-030-58558-7}
}
ICCV 2019
Oral
Agile Depth Sensing Using Triangulation Light Curtains
Joseph Bartels, Jian Wang, William ‘Red’ Whittaker, Srinivasa Narasimhan


Depth sensors like LIDARs and Kinect use a fixed depth acquisition strategy that is independent of the scene of interest. Due to the low spatial and temporal resolution of these sensors, this strategy can undersample parts of the scene that are important (small or fast moving objects), or oversample areas that are not informative for the task at hand (a fixed planar wall). In this paper, we present an approach and system to dynamically and adaptively sample the depths of a scene using the principle of triangulation light curtains. The approach directly detects the presence or absence of objects at specified 3D lines. These 3D lines can be sampled sparsely, non-uniformly, or densely only at specified regions. The depth sampling can be varied in real-time, enabling quick object discovery or detailed exploration of areas of interest. These results are achieved using a novel prototype light curtain system that is based on a 2D rolling shutter camera with higher light efficiency, working range, and faster adaptation than previous work, making it useful broadly for autonomous navigation and exploration.

@inproceedings{bartels2019agile,
    title     = {Agile Depth Sensing Using Triangulation Light Curtains},
    author    = {Bartels, Joseph R. AND Wang, Jian AND Whittaker, William AND Narasimhan, Srinivasa G.},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision},
    pages     = {7900--7908},
    year      = {2019}
}
ECCV 2018
Oral
Programmable Triangulation Light Curtains
Jian Wang, Joseph Bartels, William ‘Red’ Whittaker, Aswin Sankaranarayanan, Srinivasa Narasimhan


A vehicle on a road or a robot in the field does not need a full-featured 3D depth sensor to detect potential collisions or monitor its blind spot. Instead, it needs to only monitor if any object comes within its near proximity which is an easier task than full depth scanning. We introduce a novel device that monitors the presence of objects on a virtual shell near the device, which we refer to as a light curtain. Light curtains offer a light-weight, resource-efficient and programmable approach to proximity awareness for obstacle avoidance and navigation. They also have additional benefits in terms of improving visibility in fog as well as flexibility in handling light fall-off. Our prototype for generating light curtains works by rapidly rotating a line sensor and a line laser, in synchrony. The device is capable of generating light curtains of various shapes with a range of 20-30m in sunlight (40m under cloudy skies and 50m indoors) and adapts dynamically to the demands of the task. We analyze properties of light curtains and various approaches to optimize their thickness as well as power requirements. We showcase the potential of light curtains using a range of real-world scenarios.
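A simple illustration of detection on the virtual shell (our thresholding rule, not the paper's method; the paper analyzes SNR, curtain thickness, and power in detail):

    import numpy as np

    def detect_on_curtain(curtain_img, ambient_img, k=5.0):
        """Flag pixels where an object intersects the light curtain.

        curtain_img : image taken while the laser sweeps the curtain.
        ambient_img : image taken with the laser off, at the same exposure.
        A pixel is a detection when the laser return exceeds the ambient
        level by k noise standard deviations.
        """
        signal = curtain_img.astype(float) - ambient_img.astype(float)
        noise = np.sqrt(np.maximum(ambient_img.astype(float), 1.0))  # shot-noise proxy
        return signal > k * noise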

@inproceedings{wang2018programmable,
    title     = {Programmable Triangulation Light Curtains},
    author    = {Wang, Jian AND Bartels, Joseph AND Whittaker, William AND Sankaranarayanan, Aswin C AND Narasimhan, Srinivasa G},
    booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
    pages     = {19--34},
    year      = {2018}
}

People

(randomized order)


Sponsors


Maintained by Siddharth Ancha
Contact: sancha@cs.cmu.edu