Recognition by Association via Learning Per-exemplar Distances
We pose the recognition problem as data association. In this setting, a novel object is explained solely in terms of a small set of exemplar objects to which it is visually similar. Inspired by the work of Frome et al., we learn separate distance functions for each exemplar; however, our distances are interpretable on an absolute scale and can be thresholded to detect the presence of an object. Our exemplars are represented as image regions and the learned distances capture the relative importance of shape, color, texture, and position features for that region. We use the distance functions to detect and segment objects in novel images by associating the bottom-up segments obtained from multiple image segmentations with the exemplar regions. We evaluate the detection and segmentation performance of our algorithm on real-world outdoor scenes from the LabelMe dataset and also show some promising qualitative image parsing results.
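The core idea above can be sketched in a few lines: each exemplar gets its own learned weighting over elementary feature distances, and because the resulting distances live on an absolute scale, a fixed threshold decides association. This is only an illustrative sketch; the feature names, weights, and threshold below are hypothetical, and the paper actually learns the weights with an SVM-style objective rather than setting them by hand.

```python
# Hypothetical sketch of a per-exemplar distance function.
# Weights and feature distances are illustrative values, not the
# paper's learned ones.

def exemplar_distance(weights, feature_dists):
    """Distance from a query segment to one exemplar: a learned,
    non-negative weighted sum of elementary feature distances
    (shape, color, texture, position)."""
    return sum(weights[f] * feature_dists[f] for f in weights)

def associate(weights, feature_dists, threshold=1.0):
    """Distances are calibrated on an absolute scale, so one fixed
    threshold tells us whether the query matches this exemplar."""
    return exemplar_distance(weights, feature_dists) < threshold

# Example: an exemplar whose learned weights emphasize texture and color.
w = {"shape": 0.1, "color": 0.4, "texture": 0.5, "position": 0.05}
d = {"shape": 0.8, "color": 0.3, "texture": 0.2, "position": 0.5}
print(exemplar_distance(w, d))  # 0.325 -- a small distance
print(associate(w, d))          # True  -- associated with this exemplar
```

In the full system, a novel image is over-segmented multiple times and each bottom-up segment is compared against every exemplar this way; segments that fall under some exemplar's threshold inherit that exemplar's label.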
Citation: Tomasz Malisiewicz, Alexei A. Efros. Recognition by Association via Learning Per-exemplar Distances. In CVPR, June 2008. PDF [BibTeX]
Presentation: Slides from a talk I gave at CMU's VASC Seminar on this work are available here in PDF format: Recognition by Association: Ask not "What is this?" but "What is it like?"
CVPR08 Poster: A poster summarizing the paper.
Code: You can download the Distance Function Learning code as well as the CVPR 2008 Segment Feature Computation code.
NOTE: This code now includes the texton computation code (much of which comes from the Berkeley texton codebase) along with the filter banks and the Texton Dictionary.
The SVM solver comes from O. Chapelle's Primal SVM implementation.
MATLAB CODE: dfuns.tar.gz
Test Set: The test set comes from LabelMe; both the images and the per-object segmentation masks are in MATLAB format: LabelMe testset
This research is supported by:
- NSF Graduate Research Fellowship to Tomasz Malisiewicz
- NSF grant CAREER IIS-0546547
- A generous gift from Google