Towards Streaming Perception

Mengtian (Martin) Li

Carnegie Mellon University

Yu-Xiong Wang

Carnegie Mellon University
UIUC

Deva Ramanan

Carnegie Mellon University
Argo AI

(Formerly titled "Towards Streaming Image Understanding")


Abstract

Embodied perception refers to the ability of an autonomous agent to perceive its environment so that it can (re)act. The responsiveness of the agent is largely governed by the latency of its processing pipeline. While past work has studied the algorithmic trade-off between latency and accuracy, there has not been a clear metric to compare different methods along the Pareto optimal latency-accuracy curve. We point out a discrepancy between standard offline evaluation and real-time applications: by the time an algorithm finishes processing a particular frame, the surrounding world has changed. To this end, we present an approach that coherently integrates latency and accuracy into a single metric for real-time online perception, which we refer to as "streaming accuracy". The key insight behind this metric is to jointly evaluate the output of the entire perception stack at every time instant, forcing the stack to consider the amount of streaming data that should be ignored while computation is occurring. More broadly, building upon this metric, we introduce a meta-benchmark that systematically converts any single-frame task into a streaming perception task. We focus on the illustrative tasks of object detection and instance segmentation in urban video streams, and contribute a novel dataset with high-quality and temporally-dense annotations. Our proposed solutions and their empirical analysis demonstrate a number of surprising conclusions: (1) there exists an optimal "sweet spot" that maximizes streaming accuracy along the Pareto optimal latency-accuracy curve, (2) asynchronous tracking and future forecasting naturally emerge as internal representations that enable streaming perception, and (3) dynamic scheduling can be used to overcome temporal aliasing, yielding the paradoxical result that latency is sometimes minimized by sitting idle and "doing nothing".
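
The pairing rule at the heart of streaming evaluation can be summarized in a few lines. The following is a minimal, hypothetical sketch (illustrative only, not the benchmark's actual API): at every query timestamp, the ground truth is compared against the most recent prediction that has finished computing, so any processing latency directly translates into an accuracy penalty.

    # Hypothetical sketch of the streaming pairing rule (not the benchmark's API).
    def streaming_pairs(gt_timestamps, outputs):
        """Pair each ground-truth timestamp with the latest finished prediction.

        gt_timestamps: sorted list of times at which ground truth is queried
        outputs: sorted list of (finish_time, prediction) tuples emitted by the
                 perception stack while the stream was playing
        """
        pairs = []
        j = -1  # index of the latest output that has finished so far
        for t in gt_timestamps:
            # advance to the most recent output whose computation finished by time t
            while j + 1 < len(outputs) and outputs[j + 1][0] <= t:
                j += 1
            pred = outputs[j][1] if j >= 0 else None  # nothing ready yet
            pairs.append((t, pred))
        return pairs

The resulting (ground truth, prediction) pairs are then scored with a standard single-frame metric, e.g., COCO average precision for detection.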


Talk

Watch on Youtube

Watch on Bilibili


M. Li, Y. Wang and D. Ramanan
Towards Streaming Perception
In ECCV, 2020.

Best Paper Honorable Mention

[Paper] [Code] [Bibtex]

Qualitative results can be found in A Visual Walkthrough of Streaming Perception Solutions.


Dataset — Argoverse-HD

Building upon the autonomous driving dataset Argoverse 1.1, we construct a dataset with high-frame-rate annotations for streaming evaluation, which we name Argoverse-HD (High-frame-rate Detection). Although it was created for streaming evaluation, Argoverse-HD can also be used to study image/video object detection, multi-object tracking, and forecasting. One key feature is that our annotations follow the MS COCO standard, which allows direct evaluation of COCO pre-trained models on this autonomous driving dataset. Since the dataset is primarily intended for evaluation, we annotate only the validation set, but we provide pseudo ground truth for the training set. We find that pseudo ground truth can be used to self-supervise the training of streaming algorithms. Additional details about the dataset itself can be found in Sections 4.1 & A.4 of the paper. Additional details about pseudo ground truth can be found in Sections 3.4 & A.2 of the paper.

We provide the download links to our dataset below. Our dataset is released under the MIT License. However, since the images come from Argoverse, you should also review their terms of use.

The annotations and pseudo ground truth are provided in COCO format with additional metadata, so they work directly with cocoapi. You can refer to our code for how to set up the image data and parse the annotations.
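
Because the files follow the COCO format, they can be loaded with the standard cocoapi (pycocotools). The snippet below is a minimal sketch under an assumed directory layout; the annotation path is hypothetical, so please refer to our code for the actual setup.

    # Minimal sketch: loading Argoverse-HD annotations with cocoapi (pycocotools).
    from pycocotools.coco import COCO

    coco = COCO('Argoverse-HD/annotations/val.json')  # hypothetical path

    # standard COCO-style queries work out of the box
    img_ids = coco.getImgIds()
    img = coco.loadImgs(img_ids[0])[0]
    anns = coco.loadAnns(coco.getAnnIds(imgIds=img['id']))
    print(img['file_name'], len(anns), 'annotations')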

Acknowledgements: this work was supported by the CMU Argo AI Center for Autonomous Vehicle Research and by the Defense Advanced Research Projects Agency (DARPA) under Contract No. HR001117C0051. Annotations for Argoverse-HD were provided by Scale AI.