Sensory Attention: Computational Sensor Paradigm for Low-Latency Adaptive Vision
V. Brajovic and T. Kanade
Image Understanding Workshop, May 1997.

Abstract

The need for robust, self-contained, low-latency vision systems is growing in applications such as high-speed visual servoing and vision-based human–computer interfaces. Conventional vision systems can hardly meet this need because 1) latency is incurred by data-transfer and computational bottlenecks, and 2) there is no top-down feedback to adapt sensor performance for improved robustness. In this paper we present a tracking computational sensor, a VLSI implementation of sensory attention. The tracking sensor focuses attention on a salient feature in its receptive field and maintains this attention in world coordinates. Using both low-latency massively parallel processing and top-down sensory adaptation, the sensor reliably tracks features of interest while suppressing other, irrelevant features that may interfere with the task at hand.
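The attention mechanism the abstract describes, winner-take-all selection of a salient feature combined with top-down suppression of distractors, can be illustrated with a minimal software sketch. The sketch below is purely illustrative, not the paper's VLSI circuit: it models a 1-D photoreceptor array, the `window` parameter, `track` function, and suppression mask are hypothetical stand-ins for the sensor's analog feedback.

```python
import numpy as np

def track(frame, prev_pos=None, window=5):
    """Winner-take-all attention over a 1-D intensity array.

    If a previous winner position is known, responses outside a small
    window around it are suppressed (a stand-in for top-down sensory
    feedback), so a brighter distractor elsewhere does not capture
    attention.
    """
    saliency = frame.astype(float).copy()
    if prev_pos is not None:
        mask = np.zeros_like(saliency)
        lo = max(prev_pos - window, 0)
        hi = min(prev_pos + window + 1, saliency.size)
        mask[lo:hi] = 1.0
        saliency *= mask               # suppress irrelevant features
    return int(np.argmax(saliency))    # winner-take-all selection

# A tracked feature near index 12; a brighter distractor appears at 40.
frame0 = np.zeros(64); frame0[12] = 1.0
frame1 = np.zeros(64); frame1[13] = 1.0; frame1[40] = 2.0

pos = track(frame0)        # acquire: global winner-take-all -> 12
pos = track(frame1, pos)   # track: distractor at 40 is suppressed -> 13
print(pos)
```

Without the top-down mask, the second frame's global winner would be the distractor at index 40; with it, attention stays locked on the original feature as it drifts.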


Copyright notice

This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.

Text Reference

V. Brajovic and T. Kanade, "Sensory Attention: Computational Sensor Paradigm for Low-Latency Adaptive Vision," Image Understanding Workshop, May 1997.

BibTeX Reference

@inproceedings{Brajovic_1997_966,
author = "Vladimir Brajovic and Takeo Kanade",
title = "Sensory Attention: Computational Sensor Paradigm for Low-Latency Adaptive Vision",
booktitle = "Image Understanding Workshop",
month = "May",
year = "1997"
}


Computational Sensor Lab, Vision and Autonomous Systems Center
The Robotics Institute is part of the School of Computer Science, Carnegie Mellon University.