
Programmable Triangulation Light Curtains

Fully programmable and adaptive light curtains for object detection on any ruled surface.


News

We have developed a new Programmable Light Curtain device that is capable of imaging 60 different curtains per second and uses a conventional rolling shutter camera. Check it out here.

Imaging Capabilities

A vehicle on a road or a robot in the field does not need a full-featured 3D depth sensor to detect potential collisions or monitor its blind spots. Instead, it only needs to monitor whether any object comes within its near proximity, which is an easier task than full depth scanning. We introduce a novel device that monitors the presence of objects on a virtual shell near the device, which we refer to as a light curtain. Programmable light curtains offer a lightweight, resource-efficient, and programmable approach to proximity awareness for obstacle avoidance and navigation. They also improve visibility in fog and smoke and offer flexibility in handling light fall-off. Our prototype generates light curtains by rapidly rotating a line sensor and a line laser in synchrony. The device can generate light curtains of various shapes with a range of 20-30 m in sunlight (40 m under cloudy skies and 50 m indoors) and adapts dynamically to the demands of the task.


Simple, Versatile, Efficient

Conventional light curtains, such as those found in elevators, garage doors, and industrial equipment, only detect obstacles on a specific plane or line and are not easily reconfigurable. Our programmable light curtain device, on the other hand, can quickly and dynamically adjust its system parameters to image any ruled surface in a volume. The raw data from the device directly provides the object detections, with no need for complex processing or point cloud parsing. These devices have applications in outdoor imaging, driver safety systems, mobile robotics, robotic manufacturing, assistive robotics, and industrial manufacturing.


Imaging Through Scattering Media

Programmable light curtains can block scattered light to see through scattering media. The line sensor only receives light from the intersection of the illumination plane and the imaging plane, effectively blocking almost all other light in the volume and significantly reducing scattered light. The device still receives a small amount of scattered light, but far less than a conventional imaging system.


Outdoor Performance

Programmable light curtains have excellent performance in ambient light. One reason for this is the concentration of both illumination and imaging into lines. By projecting lines of light and imaging along lines, rather than emitting broad flashes of light, the received light per unit area is much greater, enabling long-range imaging. This is similar to the approach we demonstrated with our previous Episcan3D and EpiToF prototypes.

Another reason for the high performance is ambient light subtraction. To increase the detection ability of the light curtain, we capture two images at each step: one with the laser on and one with it off to capture just the ambient light. The ambient image is then subtracted from the laser image to isolate the contribution from the laser light. With this approach, the programmable light curtain device can see a white board over 25 meters away in bright sunlight.
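As a rough illustration, the subtraction step amounts to a single per-pixel operation. The sketch below assumes the two line-sensor captures arrive as NumPy arrays; the function name and threshold are illustrative placeholders, not parameters of the prototype.

    import numpy as np

    def laser_only_signal(line_laser_on, line_laser_off, detection_threshold=None):
        # Both inputs are raw intensities from the line sensor, captured
        # back-to-back: once with the laser enabled, once with it disabled.
        on = line_laser_on.astype(np.float32)
        off = line_laser_off.astype(np.float32)
        signal = np.clip(on - off, 0.0, None)    # ambient light cancels out
        if detection_threshold is None:
            return signal
        return signal > detection_threshold      # binary detections along the line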

Performance in Ambient Light


Depth Maps

By sweeping a single fronto-parallel plane through a volume, a depth map can be generated. This video shows a depth map captured using 80 discrete depth slices.
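One way to assemble such a depth map is sketched below, under the assumption that each slice yields a binary detection image and that the nearest detecting slice wins per pixel; the names and array shapes are illustrative, not the prototype's data format.

    import numpy as np

    def depth_map_from_slices(detections, slice_depths):
        # detections: (K, H, W) boolean stack, one fronto-parallel curtain per slice
        # slice_depths: (K,) curtain depths in metres, ordered from near to far
        detections = np.asarray(detections, dtype=bool)
        depths = np.asarray(slice_depths, dtype=np.float32)
        first_hit = np.argmax(detections, axis=0)   # index of nearest detecting slice
        has_hit = detections.any(axis=0)
        return np.where(has_hit, depths[first_hit], np.nan)  # NaN where nothing was seen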


Depth Adaptive Imaging

Programmable light curtains can adjust the camera exposure and laser power to adaptively capture the scene in a single shot. For example, the exposure can be decreased at close ranges and increased at longer ranges to produce a complete depth map with uniform intensity. Similarly, the laser power can be reduced at close ranges and increased at longer ranges.
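A minimal sketch of this kind of adaptation, assuming the dominant effect is the 1/d² fall-off of returned light; the reference depth, caps, and base settings below are illustrative placeholders rather than the prototype's calibrated values.

    def depth_adaptive_settings(depth_m, ref_depth_m=5.0,
                                base_exposure_us=100.0, max_exposure_us=400.0,
                                base_power=0.25, max_power=1.0):
        # Scale exposure and laser power with the square of the curtain depth so
        # that the intensity returned from the curtain stays roughly constant.
        scale = (depth_m / ref_depth_m) ** 2
        exposure_us = min(base_exposure_us * scale, max_exposure_us)
        laser_power = min(base_power * scale, max_power)
        return exposure_us, laser_power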

This type of single-shot, depth-adaptive imaging is not possible with LIDAR or other systems that have no advance knowledge of the depth of the scene being imaged. Because a light curtain is designed for specific depths, the system can be tuned adaptively for better results.

Depth Adaptive Exposure
Depth Adaptive Laser Power


Applications

Programmable light curtains are a highly efficient and versatile method for obstacle detection. They work in bright ambient light, require very little computational power, and have applications in industrial manufacturing, outdoor sensing, mobile robotics, robotic manufacturing, assistive robotics, and advanced driver assistance systems (ADAS).

Driver Safety Systems

Programmable light curtains can be used in driver safety systems. They can be used for vehicle lane monitoring and for monitoring vehicles backing out of parking spaces. Other applications include monitoring pedestrians entering the roadway from the sidewalk and in crosswalks.


How it Works

The Working Principle

A light curtain consists of an illumination plane and an imaging plane. In a traditional light curtain these are precisely aligned facing each other to detect anything that breaks the light plane between them. These traditional light curtains are very reliable, but only detect objects in a plane, and are difficult to reconfigure.

A programmable light curtain device places the illumination and imaging planes side-by-side so that they intersect in a line. If there is nothing along this line, the camera sees nothing. But if an object lies along this line, light is reflected towards the camera and the object is detected. By changing the angles between the imaging and illumination planes, this line is swept through a volume to create a light curtain. The sequence of plane angles is determined by triangulation from a specified light curtain shape and can be changed in real time to generate many light curtains per second.
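In a simplified top-down view, that triangulation reduces to two arctangents per curtain point. The sketch below assumes the laser sits at the origin and the camera is offset by the baseline along x, and it ignores the mirror and lens calibration that the real device must also apply.

    import numpy as np

    def curtain_to_galvo_angles(xs, zs, baseline_m=0.15):
        # (xs, zs): curtain points in the device's top-down plane, in metres.
        # Returns the laser and camera galvo angles (radians) whose illumination
        # and imaging planes intersect at each requested point.
        xs = np.asarray(xs, dtype=np.float64)
        zs = np.asarray(zs, dtype=np.float64)
        theta_laser = np.arctan2(xs, zs)                 # steer the light plane
        theta_camera = np.arctan2(xs - baseline_m, zs)   # steer the imaging plane
        return theta_laser, theta_camera

    # Example: a flat curtain 3 m in front of the device, 200 points wide
    xs = np.linspace(-2.0, 2.0, 200)
    theta_l, theta_c = curtain_to_galvo_angles(xs, np.full_like(xs, 3.0))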

The raw data from the camera is thresholded to generate a binary image of where obstacles were detected along the curtain. This provides a very simple, reliable, and efficient means for object detection.

Since the illumination and imaging are synchronized and focused on a single line, the exposure can be very short (~100 µs). This short exposure integrates very little ambient light while still collecting all of the light from the illumination system.


Optical Schematic

The illumination side is composed of a laser, a collimation lens, a lens to fan the laser into a line, and a galvo mirror to direct the laser line. The imaging side contains a line camera, a small lens, and another galvo mirror to direct the viewing angle. The illumination side emits a plane of light, and the imaging side captures a plane of light with the line sensor. By moving the galvo mirrors, the planes of illumination and imaging can be pointed to intersect at different positions. Scanning this intersection quickly forms a light curtain.

Optical Schematic

The Prototype

The Programmable Light Curtain prototype consists of:

  • Illumination system using a 1D light source and galvo mirror
  • Imaging system using a 1D line imager and galvo mirror
  • 2D helper camera for visualization and calibration only

Performance Specs:

  • Resolution: 200 × 1000
  • FOV: 45° (h) × 45° (v)
  • Baseline: 15 cm between imaging and illumination centers
  • Range in sunlight (white scene)*: 25 meters
  • Range in darkness (white scene): 50+ meters
  • Frame Rate: 5.6 Hz
  • *See paper for range details

Our prototype

More Details

For an in-depth description of the technology behind Programmable Light Curtains, please refer to our paper and the accompanying video.

Jian Wang, Joseph R. Bartels, William L. ‘Red’ Whittaker, Aswin C. Sankaranarayanan, Srinivasa G. Narasimhan. "Programmable Triangulation Light Curtains", ECCV 2018

Link to paper

About

The programmable light curtain technology was developed by researchers in Carnegie Mellon's Robotics Institute and Department of Electrical and Computer Engineering.

Patents Pending.



Sponsors

This research was supported in part by an ONR grant N00014-15-1-2358, an ONR DURIP Award N00014-16-1-2906, and DARPA REVEAL Co-operative Agreement HR0011-16-2-0021. A. C. Sankaranarayanan was supported in part by the NSF CAREER grant CCF-1652569. J. Bartels was supported by NASA fellowship NNX14AM53H.

Copyright © 2018 Joe Bartels