Programmable Light Curtains

Sense what you want. When you want. Where you want.


Overview

Depth sensors such as LiDAR and the Kinect use a fixed depth acquisition strategy that is independent of the scene of interest. Due to their low spatial and temporal resolution, this strategy can undersample parts of the scene that are important (small or fast-moving objects) or oversample areas that are not informative for the task at hand (a fixed planar wall).

We've developed an approach and system to dynamically and adaptively sample the depths of a scene. The approach directly detects the presence or absence of objects along specified 3D lines. These 3D lines can be sampled sparsely, non-uniformly, or densely only at specified regions. The depth sampling can be varied in real time, enabling quick object discovery or detailed exploration of areas of interest. The controllable nature of light curtains also presents a challenge: the user must specify where in the scene the light curtains should be placed.

We have designed novel algorithms, using a combination of machine learning, computer vision, planning, and dynamic programming, that program light curtains for accurate depth estimation, semantic object detection, and obstacle detection and avoidance. Please see our publications for more details.

Figure: LiDAR vs. Light Curtain

How it works

Triangulation Light Curtain Principle

A light curtain consists of an illumination plane and an imaging plane. In a traditional safety light curtain, such as those used in elevators, these are precisely aligned facing each other to detect anything that breaks the light plane between them. These traditional light curtains are very reliable, but only detect objects in a plane, and are difficult to reconfigure.

A programmable light curtain device places the illumination and imaging planes side by side so that they intersect in a line. If there is nothing along this line, the camera sees nothing; if there is an object along this line, light is reflected toward the camera and the object is detected. By changing the angles between the imaging and illumination planes, this line is swept through a volume to create a light curtain. The sequence of plane angles is determined by triangulation from a specified curtain profile and can be changed in real time to generate many light curtains per second.
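
To make the triangulation concrete, here is a minimal top-down sketch in Python. The 2D geometry (camera at the origin, projector offset by the baseline along the x-axis) and all names are illustrative, not the device's actual calibration:

import numpy as np

def galvo_angles(curtain_xz, baseline=0.2):
    """curtain_xz: (N, 2) array of desired curtain points (x, z) in meters,
    one per camera ray. Returns camera-ray and projector angles (radians)."""
    x, z = curtain_xz[:, 0], curtain_xz[:, 1]
    theta_cam = np.arctan2(x, z)              # camera ray through each point
    theta_proj = np.arctan2(x - baseline, z)  # laser plane angle at projector
    return theta_cam, theta_proj

# Example: a flat curtain 5 m ahead, spanning +/- 2 m laterally.
profile = np.stack([np.linspace(-2, 2, 11), np.full(11, 5.0)], axis=1)
cam, proj = galvo_angles(profile)
print(np.degrees(proj))  # the galvo command sequence, one angle per ray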

Since the illumination and imaging are synchronized and focused on a single line, the exposure can be very short (~100 µs). This short exposure integrates very little ambient light (at 60 Hz, more than two orders of magnitude less than a full-frame ~16.7 ms exposure would) while still collecting all of the light from the illumination system.

Optical Schematic

The illumination system uses a custom-built light sheet projector comprising a laser, a collimation lens, a lens that fans the beam into a line, and a galvo mirror that steers the laser line. The imaging side uses a 2D rolling shutter camera. The projector emits a plane of light while the camera captures a plane at a time, and the motion of the galvo-steered light sheet is synchronized with the progression of the camera's rolling shutter so that the illumination plane and the imaging plane intersect along the curtain profile. This scanning happens at the full frame rate of the camera, producing 60 light curtains per second.
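
A rough sketch of that row-to-galvo synchronization, assuming a simple linear rolling-shutter timing model and pinhole row angles; the constants and the row-to-angle mapping below are placeholders, not the real device calibration:

import numpy as np

FPS = 60          # camera frame rate -> 60 curtains per second
N_ROWS = 512      # rolling-shutter rows, each imaging one plane
FOV = np.radians(45.0)
BASELINE = 0.2    # meters between projector and camera

def galvo_schedule(depths):
    """depths: range (m) at which the curtain crosses each row's plane.
    Returns (time_s, galvo_angle_rad) pairs, one per camera row."""
    rows = np.arange(N_ROWS)
    t = rows / (N_ROWS * FPS)                       # when each row is exposed
    theta_cam = (rows / (N_ROWS - 1) - 0.5) * FOV   # that row's view angle
    # Curtain point hit by each camera ray, in the device frame:
    x = depths * np.sin(theta_cam)
    z = depths * np.cos(theta_cam)
    theta_galvo = np.arctan2(x - BASELINE, z)       # triangulated laser angle
    return t, theta_galvo

t, ang = galvo_schedule(np.full(N_ROWS, 5.0))       # flat curtain at 5 m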


Prototype

Our light curtain prototype consists of:

  • Light Sheet Projector illumination system using a 1D light source and a galvo mirror
  • Imaging system using a 2D rolling shutter camera
  • 2D helper camera for visualization only

Performance Specs:

  • Resolution: 512 × 640
  • FOV: 40° (h) × 45° (v)
  • Baseline: 20 cm between light sheet projector and camera
  • Outdoor Range in sunlight (white scene): 20 meters
  • Indoor Range (white scene): 50+ meters
  • Frame Rate: 60 Hz

Our prototype


Publications


RSS 2023
Active Velocity Estimation using Light Curtains via Self-Supervised Multi-Armed Bandits
Siddharth Ancha, Gaurav Pathak, Ji Zhang, Srinivasa Narasimhan, David Held


To navigate in an environment safely and autonomously, robots must accurately estimate where obstacles are and how they move. Instead of using expensive traditional 3D sensors, we explore the use of a much cheaper, faster, and higher resolution alternative: programmable light curtains. Light curtains are a controllable depth sensor that sense only along a surface that the user selects. We adapt a probabilistic method based on particle filters and occupancy grids to explicitly estimate the position and velocity of 3D points in the scene using partial measurements made by light curtains. The central challenge is to decide where to place the light curtain to accurately perform this task. We propose multiple curtain placement strategies guided by maximizing information gain and verifying predicted object locations. Then, we combine these strategies using an online learning framework. We propose a novel self-supervised reward function that evaluates the accuracy of current velocity estimates using future light curtain placements. We use a multi-armed bandit framework to intelligently switch between placement policies in real time, outperforming fixed policies. We develop a full-stack navigation system that uses position and velocity estimates from light curtains for downstream tasks such as localization, mapping, path-planning, and obstacle avoidance. This work paves the way for controllable light curtains to accurately, efficiently, and purposefully perceive and navigate complex and dynamic environments.
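
As a flavor of the online learning component, here is a minimal Exp3-style bandit over a few hypothetical placement policies. The policy names and the reward stub are stand-ins for the paper's curtain placement strategies and its self-supervised reward, which scores current velocity estimates against future curtain measurements:

import numpy as np

rng = np.random.default_rng(0)
POLICIES = ["max_info_gain", "verify_forecast", "random"]  # assumed names
K, gamma = len(POLICIES), 0.1
weights = np.ones(K)

def self_supervised_reward(policy):
    # Placeholder: the real reward evaluates velocity-estimate accuracy
    # using measurements from future light curtain placements.
    return rng.uniform(0, 1)

for step in range(100):
    probs = (1 - gamma) * weights / weights.sum() + gamma / K
    arm = rng.choice(K, p=probs)
    r = self_supervised_reward(POLICIES[arm])
    weights[arm] *= np.exp(gamma * r / (K * probs[arm]))  # Exp3 update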

@inproceedings{ancha2023rss,
    title     = {Active Velocity Estimation using Light Curtains via Self-Supervised Multi-Armed Bandits},
    author    = {Siddharth Ancha AND Gaurav Pathak AND Ji Zhang AND Srinivasa Narasimhan AND David Held}, 
    booktitle = {Proceedings of Robotics: Science and Systems}, 
    year      = {2023}, 
    address   = {Daegu, Republic of Korea}, 
    month     = {July}, 
}
CVPR 2022
Holocurtains: Programming Light Curtains via Binary Holography
Dorian Chan, Srinivasa Narasimhan, Matthew O'Toole


Light curtain systems are designed for detecting the presence of objects within a user-defined 3D region of space, which has many applications across vision and robotics. However, the shapes of light curtains have so far been limited to ruled surfaces, i.e., surfaces composed of straight lines. In this work, we propose Holocurtains: a light-efficient approach to producing light curtains of arbitrary shape. The key idea is to synchronize a rolling-shutter camera with a 2D holographic projector, which steers (rather than blocks) light to generate bright structured light patterns. Our prototype projector uses a binary digital micromirror device (DMD) to generate the holographic interference patterns at high speeds. Our system produces 3D light curtains that cannot be achieved with traditional light curtain setups and thus enables all-new applications, including the ability to simultaneously capture multiple light curtains in a single frame, detect subtle changes in scene geometry, and transform any 3D surface into an optical touch interface.
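
For intuition only, here is a toy binary Fourier hologram in the spirit of this work, not its actual algorithm: a target far-field pattern is given a random phase, inverse-Fourier-transformed, and the real part is thresholded into a 0/1 DMD mirror pattern:

import numpy as np

def binary_hologram(target_intensity, rng=np.random.default_rng(0)):
    """target_intensity: 2D array of desired far-field intensity.
    Returns a 0/1 DMD mirror pattern whose far field roughly matches it."""
    amp = np.sqrt(target_intensity)
    phase = rng.uniform(0, 2 * np.pi, amp.shape)   # random diffuser phase
    field = np.fft.ifft2(np.fft.ifftshift(amp * np.exp(1j * phase)))
    return (field.real > 0).astype(np.uint8)       # binarize for the DMD

target = np.zeros((256, 256)); target[100:156, 128] = 1.0  # a line curtain
dmd = binary_hologram(target)
# Check: |FFT(dmd)|^2 should concentrate energy near the target line
# (plus the DC term and conjugate image inherent to binary holograms).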

@inproceedings{chan2022holocurtains,
    title     = {Holocurtains: Programming Light Curtains via Binary Holography},
    author    = {Chan, Dorian and Narasimhan, Srinivasa and O'Toole, Matthew},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    year      = {2022}
}
RSS 2021
Active Safety Envelopes using Light Curtains with Probabilistic Guarantees
Siddharth Ancha, Gaurav Pathak, Srinivasa G. Narasimhan, David Held


To safely navigate unknown environments, robots must accurately perceive dynamic obstacles. Instead of directly measuring the scene depth with a LiDAR sensor, we explore the use of a much cheaper and higher resolution sensor: programmable light curtains. Light curtains are controllable depth sensors that sense only along a surface that a user selects. We use light curtains to estimate the safety envelope of a scene: a hypothetical surface that separates the robot from all obstacles. We show that generating light curtains that sense random locations (from a particular distribution) can quickly discover the safety envelope for scenes with unknown objects. Importantly, we produce theoretical safety guarantees on the probability of detecting an obstacle using random curtains. We combine random curtains with a machine learning based model that forecasts and tracks the motion of the safety envelope efficiently. Our method accurately estimates safety envelopes while providing probabilistic safety guarantees that can be used to certify the efficacy of a robot perception system to detect and avoid dynamic obstacles. We evaluate our approach in a simulated urban driving environment and a real-world environment with moving pedestrians using a light curtain device and show that we can estimate safety envelopes efficiently and effectively.
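
The quantity being guaranteed can be illustrated with a small Monte Carlo sketch; the paper computes these detection probabilities analytically with dynamic programming, and the obstacle, depth distribution, and tolerance below are made up:

import numpy as np

rng = np.random.default_rng(0)
N_RAYS, DEPTHS = 64, np.linspace(1.0, 10.0, 50)   # discretized ranges

def random_curtain():
    # One depth per camera ray, drawn from a (here: uniform) distribution.
    return rng.choice(DEPTHS, size=N_RAYS)

def detects(curtain, obj_rays, obj_depth, tol=0.25):
    # The curtain detects the object if it passes within `tol` meters of
    # the object's surface along any ray the object occupies.
    return np.any(np.abs(curtain[obj_rays] - obj_depth) < tol)

obj_rays, obj_depth = np.arange(20, 30), 4.0      # hypothetical obstacle
hits = np.mean([detects(random_curtain(), obj_rays, obj_depth)
                for _ in range(10000)])
print(f"P(detect obstacle with one random curtain) ~ {hits:.3f}")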

@inproceedings{Ancha-RSS-21, 
    author    = {Siddharth Ancha AND Gaurav Pathak AND Srinivasa Narasimhan AND David Held}, 
    title     = {Active Safety Envelopes using Light Curtains with Probabilistic Guarantees}, 
    booktitle = {Proceedings of Robotics: Science and Systems}, 
    year      = {2021}, 
    address   = {Virtual}, 
    month     = {July}, 
    doi       = {10.15607/rss.2021.xvii.045} 
}
CVPR 2021
Exploiting & Refining Depth Distributions with Triangulation Light Curtains
Yaadhav Raaj, Siddharth Ancha, Robert Tamburo, David Held, Srinivasa Narasimhan


Active sensing through the use of adaptive depth sensors is a nascent field, with potential in areas such as advanced driver-assistance systems (ADAS). These sensors do, however, require dynamically driving a laser or light source to specific locations to capture information; one such class of sensors is programmable light curtains. In this work, we introduce a novel approach that exploits prior depth distributions from RGB cameras to drive a light curtain's laser line to regions of uncertainty and obtain new measurements. These measurements are used to recursively reduce depth uncertainty and correct errors. We show real-world experiments that validate our approach in outdoor and driving settings, and demonstrate qualitative and quantitative improvements in depth RMSE when RGB cameras are used in tandem with a light curtain.
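
A hedged sketch of the recursive update for a single camera ray: the flat depth prior, the hit/miss sensor model, and the placement rule (probe the current MAP depth) are all illustrative stand-ins for the paper's RGB-derived distributions and placement logic:

import numpy as np

DEPTHS = np.linspace(1.0, 10.0, 50)
P_HIT_AT_SURFACE, P_FALSE = 0.9, 0.05   # assumed curtain sensor model

def bayes_update(belief, placed_depth, got_return, tol=0.25):
    near = np.abs(DEPTHS - placed_depth) < tol
    like = np.where(near, P_HIT_AT_SURFACE, P_FALSE)
    if not got_return:
        like = 1.0 - like
    post = belief * like
    return post / post.sum()

belief = np.ones_like(DEPTHS) / len(DEPTHS)   # flat prior on one ray
true_depth = 4.0
for _ in range(30):
    place = DEPTHS[np.argmax(belief)]          # probe the current MAP depth
    hit = abs(place - true_depth) < 0.25
    belief = bayes_update(belief, place, hit)
print(DEPTHS[np.argmax(belief)])               # ends up near the true 4.0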

@inproceedings{cvpr2021raajexploiting,
    author    = {Yaadhav Raaj AND Siddharth Ancha AND Robert Tamburo AND David Held AND Srinivasa Narasimhan},
    title     = {Exploiting and Refining Depth Distributions with Triangulation Light Curtains},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    year      = {2021}
}
ECCV 2020
Spotlight
Active Perception using Light Curtains for Autonomous Driving
Siddharth Ancha, Yaadhav Raaj, Peiyun Hu, Srinivasa Narasimhan, David Held


Most real-world 3D sensors such as LiDARs perform fixed scans of the entire environment, while being decoupled from the recognition system that processes the sensor data. In this work, we propose a method for 3D object recognition using light curtains, a resource-efficient controllable sensor that measures depth at user-specified locations in the environment. Crucially, we propose using prediction uncertainty of a deep learning based 3D point cloud detector to guide active perception. Given a neural network's uncertainty, we derive an optimization objective to place light curtains using the principle of maximizing information gain. Then, we develop a novel and efficient optimization algorithm to maximize this objective by encoding the physical constraints of the device into a constraint graph and optimizing with dynamic programming. We show how a 3D detector can be trained to detect objects in a scene by sequentially placing uncertainty-guided light curtains to successively improve detection accuracy.
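
The constraint-graph optimization can be sketched compactly: nodes are (camera ray, depth bin) pairs, edges allow only galvo-feasible transitions between adjacent rays, and dynamic programming finds the curtain maximizing total gain. The gain matrix and the feasibility test below are placeholders for the detector's uncertainty and the real device constraints:

import numpy as np

N_RAYS, N_BINS = 32, 40
gain = np.random.default_rng(0).random((N_RAYS, N_BINS))  # e.g. uncertainty
MAX_STEP = 3   # max change in depth-bin index between adjacent rays
               # (stand-in for the real galvo velocity constraint)

value = gain[0].copy()
parent = np.zeros((N_RAYS, N_BINS), dtype=int)
for r in range(1, N_RAYS):
    new = np.full(N_BINS, -np.inf)
    for b in range(N_BINS):
        lo, hi = max(0, b - MAX_STEP), min(N_BINS, b + MAX_STEP + 1)
        j = lo + int(np.argmax(value[lo:hi]))   # best feasible predecessor
        new[b], parent[r, b] = value[j] + gain[r, b], j
    value = new

# Backtrack the optimal curtain (one depth bin per camera ray).
curtain = [int(np.argmax(value))]
for r in range(N_RAYS - 1, 0, -1):
    curtain.append(parent[r, curtain[-1]])
curtain.reverse()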

@inproceedings{ancha2020eccv,
    author    = {Ancha, Siddharth AND Raaj, Yaadhav AND Hu, Peiyun AND Narasimhan, Srinivasa G. AND Held, David},
    editor    = {Vedaldi, Andrea AND Bischof, Horst AND Brox, Thomas AND Frahm, Jan-Michael},
    title     = {Active Perception Using Light Curtains for Autonomous Driving},
    booktitle = {Computer Vision -- ECCV 2020},
    year      = {2020},
    publisher = {Springer International Publishing},
    address   = {Cham},
    pages     = {751--766},
    isbn      = {978-3-030-58558-7}
}
ICCV 2019
Oral
Agile Depth Sensing Using Triangulation Light Curtains
Joseph Bartels, Jian Wang, William ‘Red’ Whittaker, Srinivasa Narasimhan


Depth sensors like LIDARs and Kinect use a fixed depth acquisition strategy that is independent of the scene of interest. Due to the low spatial and temporal resolution of these sensors, this strategy can undersample parts of the scene that are important (small or fast moving objects), or oversample areas that are not informative for the task at hand (a fixed planar wall). In this paper, we present an approach and system to dynamically and adaptively sample the depths of a scene using the principle of triangulation light curtains. The approach directly detects the presence or absence of objects at specified 3D lines. These 3D lines can be sampled sparsely, non-uniformly, or densely only at specified regions. The depth sampling can be varied in real-time, enabling quick object discovery or detailed exploration of areas of interest. These results are achieved using a novel prototype light curtain system that is based on a 2D rolling shutter camera with higher light efficiency, working range, and faster adaptation than previous work, making it useful broadly for autonomous navigation and exploration.

@inproceedings{bartels2019agile,
    title     = {Agile depth sensing using triangulation light curtains},
    author    = {Bartels, Joseph R and Wang, Jian and Whittaker, William and Narasimhan, Srinivasa G},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision},
    pages     = {7900--7908},
    year      = {2019}
}
ECCV 2018
Oral
Programmable Triangulation Light Curtains
Jian Wang, Joseph Bartels, William ‘Red’ Whittaker, Aswin Sankaranarayanan, Srinivasa Narasimhan


A vehicle on a road or a robot in the field does not need a full-featured 3D depth sensor to detect potential collisions or monitor its blind spot. Instead, it needs to only monitor if any object comes within its near proximity which is an easier task than full depth scanning. We introduce a novel device that monitors the presence of objects on a virtual shell near the device, which we refer to as a light curtain. Light curtains offer a light-weight, resource-efficient and programmable approach to proximity awareness for obstacle avoidance and navigation. They also have additional benefits in terms of improving visibility in fog as well as flexibility in handling light fall-off. Our prototype for generating light curtains works by rapidly rotating a line sensor and a line laser, in synchrony. The device is capable of generating light curtains of various shapes with a range of 20-30m in sunlight (40m under cloudy skies and 50m indoors) and adapts dynamically to the demands of the task. We analyze properties of light curtains and various approaches to optimize their thickness as well as power requirements. We showcase the potential of light curtains using a range of real-world scenarios.
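
One property worth a back-of-the-envelope check is curtain thickness. Using the standard small-angle triangulation uncertainty (not this paper's exact analysis), a camera pixel subtending angle δθ intersects the light sheet over a depth extent of roughly z²·δθ/b at range z with baseline b. A sketch, with illustrative numbers loosely taken from the prototype specs above:

import numpy as np

def curtain_thickness(z, baseline=0.2, pixel_angle=np.radians(45) / 512):
    # Small-angle triangulation uncertainty: depth extent of the
    # intersection between one pixel's ray and the light sheet.
    return z ** 2 * pixel_angle / baseline

for z in (5.0, 10.0, 20.0):
    print(f"z = {z:4.1f} m -> thickness ~ {100 * curtain_thickness(z):.1f} cm")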

@inproceedings{wang2018programmable,
    title     = {Programmable triangulation light curtains},
    author    = {Wang, Jian AND Bartels, Joseph AND Whittaker, William AND Sankaranarayanan, Aswin C AND Narasimhan, Srinivasa G},
    booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
    pages     = {19--34},
    year      = {2018}
}

Code


People

(randomized order)


Sponsors


Maintained by Siddharth Ancha
Contact: sancha@cs.cmu.edu