Representation of the pixels, by the pixels, and for the pixels.

Aayush Bansal, Xinlei Chen, Bryan Russell, Abhinav Gupta, Deva Ramanan



We explore design principles for general pixel-level prediction problems, from low-level edge detection to mid-level surface normal estimation to high-level semantic segmentation. Convolutional predictors, such as the fully-convolutional network (FCN), have achieved remarkable success by exploiting the spatial redundancy of neighboring pixels through convolutional processing. Though computationally efficient, we point out that such approaches are not statistically efficient during learning precisely because spatial redundancy limits the information learned from neighboring pixels. We demonstrate that stratified sampling of pixels allows one to:

(1) add diversity during batch updates, speeding up learning;

(2) explore complex nonlinear predictors, improving accuracy;

(3) efficiently train state-of-the-art models tabula rasa (i.e., "from scratch") for diverse pixel-labeling tasks. Our single architecture produces state-of-the-art results for semantic segmentation on the PASCAL-Context dataset, surface normal estimation on the NYUDv2 depth dataset, and edge detection on BSDS; and

(4) demonstrate self-supervised representation learning via geometry. Even with few data points, we achieve better results than previous approaches for unsupervised/self-supervised representation learning.
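To make the stratified-sampling idea above concrete, here is a minimal NumPy sketch (not the paper's implementation; the function name and parameters are hypothetical). It draws a fixed number of random pixel locations from each image in a batch, so a batch update sees diverse pixels across images rather than a dense block of spatially redundant neighbors:

```python
import numpy as np

def sample_pixels(labels, n_per_image=2000, rng=None):
    """Stratified pixel sampling (illustrative sketch): draw a fixed
    number of random pixel locations from *each* image in a batch.

    labels: (B, H, W) array of per-pixel targets (class IDs, normals, etc.).
    Returns (batch_idx, row_idx, col_idx) index arrays of length
    B * n_per_image, usable to gather features and targets for the loss.
    """
    rng = np.random.default_rng(rng)
    B, H, W = labels.shape
    batch_idx, row_idx, col_idx = [], [], []
    for b in range(B):
        # Sample without replacement within this image (the stratum).
        flat = rng.choice(H * W, size=n_per_image, replace=False)
        batch_idx.append(np.full(n_per_image, b))
        row_idx.append(flat // W)
        col_idx.append(flat % W)
    return (np.concatenate(batch_idx),
            np.concatenate(row_idx),
            np.concatenate(col_idx))

# Usage: compute the training loss only on the sampled pixels.
labels = np.random.randint(0, 21, size=(4, 224, 224))   # toy batch of label maps
b, r, c = sample_pixels(labels, n_per_image=2000, rng=0)
sampled_targets = labels[b, r, c]                        # shape (8000,)
```

Because the loss is evaluated on a small, diverse pixel subset per image, each gradient step carries more independent information than one computed densely over all pixels of a single image.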


PixelNet: Representation of the pixels, by the pixels, and for the pixels.

A. Bansal, X. Chen, B. Russell, A. Gupta, and D. Ramanan

PDF | bibtex

An older arXiv version is available here.


The source code is available on GitHub.

Related Papers

A. Bansal, B. Russell, and A. Gupta. Marr Revisited: 2D-3D Model Alignment via Surface Normal Prediction. In CVPR, 2016


This work was supported in part by NSF Grants IIS 0954083 and IIS 1618903, support from Google and Facebook, and an Uber Presidential Fellowship to AB.

Please send comments and questions to Aayush Bansal.