Most of my research at Carnegie Mellon has focused on intelligent vehicles. I have listed several of my projects below. Other research interests include: multimedia indexing, software library design (especially for image understanding), and artificial personalities.

For actual documents, see my publications page.

Laser Intensity-based Obstacle Detection and Tracking (Ph.D. Thesis)

Ph.D. advisors: Chuck Thorpe, Martial Hebert

Obstacle detection is necessary for any autonomous mobile robot. If cars are ever going to drive themselves on the highways, they will need reliable obstacle detection. Highways present an unknown and dynamic environment with real-time constraints. In addition, the high speeds of travel force a system to detect objects at long ranges. While there are a number of sensors and methods that can successfully detect other vehicles, the more difficult problem of detecting small, static road debris remains unsolved.

Laser range scanners, or ladars, have been used for many years for obstacle detection. Laser scanners operate by sweeping a laser beam across a scene and, at each angle, measuring the range and the returned intensity. Past researchers have ignored the intensity signal while focusing on the range returned from the laser, since the range provides direct 3-D information useful for mapping. In this thesis, I demonstrate how laser intensity alone can be used to detect and track obstacles. While laser ranging demands fast, complicated electronics, intensity can be measured cheaply. Minimizing cost will be extremely important for any consumer system, and is a strong motivating factor for this thesis.

Laser intensity provides different information from ordinary video data since lighting and viewing directions are coincident. At long ranges and grazing angles, vertical obstacles reflect significantly more laser energy than the horizontal road. Since laser intensity has been ignored in the obstacle detection literature, I have devoted a significant portion of the thesis to examining the intensity measurements provided by the scanner. I have developed a laser reflectance model which provides good results for a wide variety of object surfaces. The reflectance model is based on experimental results and a combination of two popular reflectance theories from the computer graphics and computer vision literature.
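The model described above can be sketched in code. This is only an illustrative combination of a Lambertian (diffuse) lobe and a Phong-style specular lobe with coincident illumination and viewing directions; the parameter names and values are assumptions for illustration, not the fitted model from the thesis.

```python
import math

def laser_return_intensity(r, incidence_deg, k_d=0.8, k_s=0.2,
                           shininess=10.0, p0=1.0):
    """Sketch of a diffuse-plus-specular reflectance model for a ladar,
    where the light source and the viewer coincide.  k_d, k_s, shininess,
    and p0 are illustrative, not values from the thesis."""
    theta = math.radians(incidence_deg)
    cos_t = math.cos(theta)
    diffuse = k_d * cos_t                  # Lambertian lobe
    specular = k_s * cos_t ** shininess    # Phong-style lobe (viewer = source)
    return p0 * (diffuse + specular) / r ** 2   # inverse-square range falloff
```

Under this sketch, a vertical obstacle seen nearly head-on (small incidence angle) returns far more energy than the road surface seen at a grazing angle at the same range, which is the effect the detection method exploits.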

Intelligent Tactical Driving Algorithms

Joint work with Rahul Sukthankar

Driving in traffic is a difficult task. It requires significant perceptual capabilities, but it also involves many short-term, or tactical, decisions based on our perception of the environment. For example, if we are in the right lane behind a slow-moving car, and our exit is coming up soon, we have to decide whether to try to pass the slow vehicle and risk missing our exit, or to stay in the lane and go more slowly than we would like. People make many of these decisions without even noticing. This research tries to teach computers to make similar tactical driving decisions.

Since we cannot allow unproven algorithms to control a robot car in mixed traffic, we built a highway simulator and algorithm design tool named SHIVA (Simulated Highways for Intelligent Vehicle Algorithms). SHIVA uses 3D graphics and a GUI to provide visualization, interaction, and debugging tools for the tactical driving algorithms. The simulator graphics were developed on a Silicon Graphics machine using Open Inventor, a 3D graphics toolkit.

We developed a reactive, voting-based robot architecture for tactical driving based on multiple, independent local experts called "reasoning objects." Optimal parameters for a given architecture are found via an evolutionary algorithm training phase.
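The voting scheme can be sketched as follows. Each expert scores a set of candidate maneuvers independently, and an arbiter sums the votes and picks the winner. The expert names, state fields, and weights below are invented for illustration; they are not SHIVA's actual reasoning objects or their evolved parameters.

```python
# Hedged sketch of a voting-based tactical architecture with
# independent "reasoning objects" (illustrative names and weights).

ACTIONS = ["stay_in_lane", "change_left", "change_right"]

def lane_keeper(state):
    # Prefers the current lane unless the gap to the lead car closes.
    gap = state["lead_gap"]
    return {"stay_in_lane": 1.0 if gap > 30 else 0.2,
            "change_left": 0.5,
            "change_right": 0.1}

def exit_planner(state):
    # Votes against leaving the right lane when the exit is near.
    near_exit = state["dist_to_exit"] < 500
    return {"stay_in_lane": 1.0 if near_exit else 0.4,
            "change_left": 0.0 if near_exit else 0.6,
            "change_right": 0.3}

def arbitrate(state, experts):
    # Sum each expert's votes and choose the highest-scoring action.
    tally = {a: 0.0 for a in ACTIONS}
    for expert in experts:
        for action, vote in expert(state).items():
            tally[action] += vote
    return max(tally, key=tally.get)
```

In this toy setup, a slow lead car normally draws votes toward a pass, but a nearby exit tips the arbiter back toward staying in lane. In the actual system, the weights inside each expert would be the parameters tuned by the evolutionary training phase.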

ELVIS: Eigenvectors for Land Vehicle Image System

Joint work with Ph.D. advisor Chuck Thorpe

One of our autonomous vehicles, Navlab 2, is an army ambulance HMMWV. Navlab 2 has a color camera mounted above the passenger compartment and an encoder mounted on the steering wheel. ELVIS is a program which learns to follow the road by observing both the road and the human trainers' steering commands.

Learning is performed through principal components analysis on the image and steering data to find correlations between the two. Driving is accomplished by projecting a new image onto the eigenspace spanned by the top several eigenvectors and recovering the corresponding steering command from the reconstruction.
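The approach can be sketched as below: each training example is a flattened image with its steering value appended, PCA keeps the top principal components, and a new image is projected onto that basis to read a steering value off the reconstruction. The dimensions, scaling, and function names are illustrative assumptions, not the exact ELVIS implementation.

```python
import numpy as np

def train_eigenspace(images, steerings, k=4):
    """Sketch of ELVIS-style learning: append the steering value to each
    flattened image and keep the top-k principal components via SVD.
    (Illustrative; real preprocessing and weighting would differ.)"""
    X = np.hstack([images.reshape(len(images), -1),
                   np.asarray(steerings).reshape(-1, 1)])
    mean = X.mean(axis=0)
    # Rows of vt are the principal directions of the centered data.
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:k]

def predict_steering(image, mean, basis):
    # Fill the unknown steering slot with the training mean, project the
    # vector onto the eigenspace, and read the steering coordinate of
    # the reconstruction.
    x = np.append(image.ravel(), mean[-1])
    coeffs = basis @ (x - mean)
    recon = mean + coeffs @ basis
    return recon[-1]
```

Because image and steering dimensions live in the same eigenspace, the projection pulls the missing steering value toward whatever the correlated image appearance implies, which is the core idea of the learning scheme described above.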

John Hancock, The Robotics Institute, Carnegie Mellon University
Last modified: Tue Feb 23 09:35:51 EST 1999