Beyond Categories: The Visual Memex Model for Reasoning About Object Relationships

People

Tomasz Malisiewicz, Alexei A. Efros

Abstract

The use of context is critical for scene understanding in computer vision, where the recognition of an object is driven by both local appearance and the object's relationship to other elements of the scene (context). Most current approaches rely on modeling the relationships between object categories as a source of context. In this paper we seek to move beyond categories to provide a richer appearance-based model of context. We present an exemplar-based model of objects and their relationships, the Visual Memex, that encodes both local appearance and 2D spatial context between object instances. We evaluate our model on Torralba's proposed Context Challenge against a baseline category-based system. Our experiments suggest that moving beyond categories for context modeling is quite beneficial, and may be the critical missing ingredient in scene understanding systems.
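To make the idea concrete, the sketch below illustrates exemplar-based context reasoning in the spirit of the Context Challenge: a hidden object is scored by how well the context edges of candidate exemplars agree with the exemplars visible in the scene. All data structures, feature vectors, and scoring functions here are hypothetical simplifications for illustration, not the paper's actual model or features.

```python
import numpy as np

class Exemplar:
    """A single object instance with a category label and an
    appearance feature vector (both hypothetical toy values here)."""
    def __init__(self, category, appearance):
        self.category = category
        self.appearance = np.asarray(appearance, dtype=float)

def appearance_similarity(a, b):
    # Gaussian kernel on feature distance (an illustrative choice,
    # not the similarity used in the paper).
    return np.exp(-np.sum((a - b) ** 2))

def context_score(candidate, scene_exemplars, context_edges):
    """Score a candidate exemplar for a hidden region by how strongly
    its context edges connect it, via appearance similarity, to the
    visible exemplars in the scene."""
    score = 0.0
    for visible in scene_exemplars:
        for (e1, e2, strength) in context_edges:
            if e1 is candidate:
                score += strength * appearance_similarity(
                    e2.appearance, visible.appearance)
    return score

# Build a tiny "memex": a few exemplars plus pairwise context edges.
keyboard = Exemplar("keyboard", [1.0, 0.0])
monitor  = Exemplar("monitor",  [0.0, 1.0])
car      = Exemplar("car",      [5.0, 5.0])

edges = [(keyboard, monitor, 1.0)]  # keyboards co-occur with monitors

scene = [monitor]  # the visible context in a test image
print(context_score(keyboard, scene, edges))  # supported by context
print(context_score(car, scene, edges))       # no supporting edges
```

Note that the reasoning never consults category labels at test time: the candidate is ranked purely by instance-to-instance edges, which is the category-free spirit of the Visual Memex.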

Citation

Tomasz Malisiewicz, Alexei A. Efros. Beyond Categories: The Visual Memex Model for Reasoning About Object Relationships. In NIPS, December 2009. PDF [BibTeX]

Presentation

Slides from a talk I gave at CMU's Misc-Read are available in PDF format:
Beyond Categories: The Visual Memex Model for Reasoning About Object Relationships

Funding

This research is supported by:
