VASC Seminar Announcement
=========================

Date:    Monday, 10/4/99
Time:    3:30-4:30
Place:   Smith Hall 2nd Floor Common Area
Speaker: Vladimir Brajovic
         CMU Robotics Institute
         http://www.cs.cmu.edu/~brajovic

Title: High Resolution Dynamic Triangulation Sensor for Rapid Range Imaging

Abstract:

In this talk I will describe a new computational sensor for triangulation-based range imaging. The chip has recently been designed and is now being fabricated. I will describe the functionality of the sensor and show what performance we can expect from it in another couple of months.

In most industrial range imaging applications, active light-stripe triangulation performs best: it offers the best compromise among accuracy, speed, system complexity, and cost. A sheet of light is projected onto the scene and is viewed by a camera from an oblique angle. The intersection of the light sheet and the object creates a light-stripe contour indicative of the object's shape at that location.

In its naive implementation, triangulation is slow. First, a light stripe is positioned at a particular location on the scene. Second, a CCD image is captured and processed to locate the light stripe, yielding one slice of range data. Then the light sheet is repositioned and the process repeated to obtain another slice of range data. In more creative implementations, multiple light sheets or light-stripe patterns are projected to obtain multiple slices of range per CCD frame.

The fastest light-stripe triangulation method is so-called dynamic triangulation [Sato et al., 1987]. Instead of discretely repositioning the light stripe, the method rapidly and continuously sweeps the light sheet across the scene. Each pixel in the imager needs only to detect the TIME at which it sees the light stripe. From this time, the angular velocity of the light sheet, and the geometric parameters of the setup, the range map is computed. Theoretically, hundreds of frames per second can be captured with this method.

Unfortunately, CCD cameras are too slow to take advantage of dynamic triangulation. Custom computational sensors are required, and they have been built by several groups, including our lab [Gruss, Kanade, 1990]. All of these sensors are cell-parallel: each pixel is concerned with its own light signal and has its own time memory.

The device that I am going to show exploits the fact that only one pixel in each row is illuminated by the light stripe at any point in time (an observation made by Kuo and Carley, CMU, in 1992). This leads us to a row-parallel architecture. One-per-row circuitry detects the location of the illuminated pixel within each row. The pixel's identity and the time WHEN it saw the light are paired by recording address-time information in one-per-row memory. Because the time memory is taken out of the pixel, the spatial resolution is 10x greater than that of any previously reported device.

The current prototype will produce 64x64 range maps at 30-100 fps. The circuitry, however, is designed to support an ultimate array size of 256x256. If the current device works well, it will be only a matter of imprinting more pixels in silicon.

Vladimir Brajovic is a Research Scientist at the Robotics Institute, where he leads the computational sensor research. He received his PhD from CMU in 1996, studying under Prof. Takeo Kanade.
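
To make the timing-to-range conversion concrete, here is a minimal illustrative sketch in Python. It is not the speaker's implementation; the function name, parameter names, and camera geometry (optical axis perpendicular to the projector-camera baseline) are assumptions for illustration only.

    import numpy as np

    def range_from_timestamps(t, omega, theta0, baseline, f, cx, pitch):
        """Convert per-pixel stripe-detection times into a range map.

        t:        HxW array of times (s) at which each pixel saw the stripe
        omega:    angular velocity of the swept light sheet (rad/s)
        theta0:   sheet angle at t = 0, measured from the baseline (rad)
        baseline: projector-to-camera separation (m)
        f, cx, pitch: focal length (m), principal-point column, pixel pitch (m)
        """
        # Angle of the light sheet at the instant each pixel fired.
        theta = theta0 + omega * t
        # Viewing angle of each pixel's ray, measured from the baseline;
        # assumes the optical axis is perpendicular to the baseline.
        cols = np.arange(t.shape[1])
        alpha = np.pi / 2 - np.arctan((cols - cx) * pitch / f)
        # Classic two-ray triangulation: z = b / (cot(theta) + cot(alpha)).
        return baseline / (1.0 / np.tan(theta) + 1.0 / np.tan(alpha))

Because theta depends only on the recorded time and alpha only on the pixel's column, the whole range map follows directly from the timestamp image, which is why sweep-based dynamic triangulation can, in principle, run at hundreds of frames per second.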
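The row-parallel readout can likewise be sketched in software. The model below is an assumed behavioral description, not the actual chip logic: at each instant of the sweep, each row's one-per-row circuit finds the single illuminated column and latches an (address, time) pair into that row's memory.

    import numpy as np

    def simulate_row_parallel_readout(frames, times, threshold):
        """frames: list of HxW intensity snapshots taken during the sweep
        times:  timestamp of each snapshot (s)
        Returns, for each row, the recorded (time, column) pairs."""
        h, w = frames[0].shape
        row_memory = [[] for _ in range(h)]        # one address-time memory per row
        for frame, t in zip(frames, times):
            for row in range(h):
                col = int(np.argmax(frame[row]))   # the single lit pixel in the row
                if frame[row, col] > threshold:
                    row_memory[row].append((t, col))  # pair address with time
        return row_memory

From the pairs held in each row's memory, a per-pixel timestamp image, and hence the range map, can be assembled off-chip; moving the time memory out of the pixel in this way is what frees the silicon area for the higher spatial resolution described above.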