There have been recent efforts to build visual knowledge bases from Internet images, but most of these approaches have focused on bounding-box representations of objects. In this paper, we propose to enrich these knowledge bases by automatically discovering objects and their segmentations from noisy Internet images. Specifically, our approach combines the power of generative modeling for segmentation with the effectiveness of discriminative models for detection. The key idea behind our approach is to learn and exploit top-down segmentation priors based on visual subcategories. The strong priors learned from these visual subcategories are then combined with discriminatively trained detectors and bottom-up cues to produce clean object segmentations. Our experimental results indicate state-of-the-art performance on the difficult Internet dataset. We have integrated our algorithm into NEIL to enrich its knowledge base.
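
The pipeline described above can be sketched roughly as follows. This is a minimal illustration, not the released implementation: the inputs (precomputed appearance features, rough initial masks, a per-window detector confidence) and the simple linear fusion are assumptions made for the sketch, not the paper's exact formulation.

    import numpy as np
    from sklearn.cluster import KMeans

    def subcategory_priors(features, init_masks, n_subcats=5):
        # Cluster images into visual subcategories by appearance, then
        # average the aligned initial masks within each cluster to obtain
        # a top-down shape prior per subcategory.
        labels = KMeans(n_clusters=n_subcats, n_init=10).fit_predict(features)
        priors = np.stack([init_masks[labels == k].mean(axis=0)
                           for k in range(n_subcats)])
        return labels, priors  # priors: (n_subcats, H, W), values in [0, 1]

    def fuse(prior, bottom_up, detector_score, thresh=0.5):
        # Blend the top-down subcategory prior with a bottom-up foreground
        # probability (e.g. from color models), weighting by the detector's
        # confidence, then threshold to get a binary segmentation.
        fg_prob = detector_score * prior + (1.0 - detector_score) * bottom_up
        return fg_prob > thresh

The intent of the design is visible even in this toy form: when the detector is confident, the learned subcategory prior dominates; when it is not, bottom-up evidence takes over.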



CVPR paper (pdf, 4.7MB)
Supplementary Material (pdf, 980MB)
Poster (pdf, 4.0MB)


Xinlei Chen, Abhinav Shrivastava and Abhinav Gupta. Enriching Visual Knowledge Bases via Object Discovery and Segmentation. In CVPR 2014.

@inproceedings{chen2014enriching,
    Author = {Xinlei Chen and Abhinav Shrivastava and Abhinav Gupta},
    Title = {{E}nriching {V}isual {K}nowledge {B}ases via {O}bject {D}iscovery and {S}egmentation},
    Booktitle = {Computer Vision and Pattern Recognition (CVPR)},
    Year = 2014
}

The code is available on GitHub!

Please note that I made some improvements after the paper was published (see the GitHub page for details). For reference, with the default settings, the results on the Internet dataset generated by this code are:

Category   Precision   Jaccard Similarity
Airplane   0.9219      0.6087
Car        0.8728      0.6274
Horse      0.9011      0.6023

On the 100 images subsampled from the same dataset, the results are:

Category   Precision   Jaccard Similarity
Airplane   0.8992      0.5462
Car        0.8937      0.6920
Horse      0.8805      0.4446

Bolded numbers indicate results that improve upon those reported in the CVPR 2014 paper.
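
For reference, the two metrics in the tables above can be computed as below. This is a minimal sketch assuming binary numpy masks and the definitions commonly used on this benchmark: precision as the fraction of correctly labeled pixels, and Jaccard similarity as intersection over union between the predicted and ground-truth foreground.

    import numpy as np

    def precision(pred, gt):
        # Fraction of pixels (foreground and background) labeled correctly.
        pred, gt = pred.astype(bool), gt.astype(bool)
        return (pred == gt).mean()

    def jaccard(pred, gt):
        # Intersection over union of predicted and ground-truth foreground.
        pred, gt = pred.astype(bool), gt.astype(bool)
        union = (pred | gt).sum()
        return (pred & gt).sum() / union if union else 1.0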


We also provide extra materials in case they are needed:


This research was supported by: