Inferring Contour Drawings from Images

Mengtian (Martin) Li

Carnegie Mellon University

Zhe Lin

Adobe Research

Radomír Měch

Adobe Research

Ersin Yumer

Uber ATG

Deva Ramanan

Carnegie Mellon University & Argo AI


Edges, boundaries and contours are important subjects of study in both computer graphics and computer vision. On one hand, they are the 2D elements that convey 3D shapes; on the other, they are indicative of occlusion events and thus of the separation of objects or semantic concepts. In this paper, we aim to generate contour drawings: boundary-like drawings that capture the outline of the visual scene. Prior art often casts this problem as boundary detection. However, the set of visual cues present in boundary detection output differs from that in contour drawings, and the artistic style is ignored. We address these issues by collecting a new dataset of contour drawings and proposing a learning-based method that resolves diversity in the annotation and, unlike boundary detectors, can work with imperfect alignment between the annotation and the actual ground truth. Our method surpasses previous methods both quantitatively and qualitatively. Surprisingly, when our model is fine-tuned on BSDS500, it achieves state-of-the-art performance in salient boundary detection, suggesting that contour drawing may be a scalable alternative to boundary annotation, one that is also easier and more enjoyable for annotators.

M. Li, Z. Lin, R. Měch, E. Yumer and D. Ramanan
Photo-Sketching: Inferring Contour Drawings from Images
In WACV, 2019.

[Paper] [Code] [Bibtex]

See below for our contour drawing dataset.

Contour Drawing Dataset

We present a new dataset of paired images and contour drawings for the study of visual understanding and sketch generation. The dataset contains 1,000 outdoor images, each paired with 5 human drawings (5,000 drawings in total). Strokes in the drawings are roughly aligned with image boundaries, making it easier to put human strokes in correspondence with image edges.
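With 5 drawings per image, a common first step is to group each photo with its sketches. The sketch below assumes a hypothetical on-disk layout (an `image/` folder of `<id>.jpg` files and a `sketch/` folder of `<id>_<k>.png` files); the official release may use different naming.

```python
from pathlib import Path

def load_pairs(root):
    """Group the 5 sketches for each photo by shared filename stem.

    Assumes a hypothetical layout (not necessarily the official one):
        root/image/<id>.jpg
        root/sketch/<id>_<k>.png   for k = 1..5
    """
    root = Path(root)
    pairs = {}
    for img in sorted((root / "image").glob("*.jpg")):
        sketches = sorted((root / "sketch").glob(f"{img.stem}_*.png"))
        pairs[img.stem] = {"image": img, "sketches": sketches}
    return pairs
```

Keeping all 5 drawings per image (rather than picking one) matters here, since the paper's method explicitly models the diversity across annotators.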

The dataset was collected on Amazon Mechanical Turk. Turkers are asked to trace over a faded background image. To obtain high-quality annotations, we designed a labeling interface with a detailed instruction page including many positive and negative examples. Quality control is carried out through manual inspection, treating annotations of the following types as rejection candidates: (1) missing inner boundaries, (2) missing important objects, (3) large misalignment with the original edges, (4) unrecognizable content, (5) humans drawn as stick figures, and (6) shading over empty areas. In addition to the 5,000 accepted drawings, we therefore have 1,947 rejected submissions, which can be used to set up an automatic quality guard.

License: the dataset is licensed under CC BY-NC-SA (Attribution-NonCommercial-ShareAlike). This means you may use the dataset for non-commercial purposes, and any adapted work must be shared under similar conditions.

Sketch Game

We demonstrate a gaming interface for collecting a large-scale sketch dataset. It is inspired by comments from the initial data collection phase stating that making such drawings is an enjoyable process. Unlike boundary detection annotation, we require only rough edge alignment, so the task is much easier. The game rewards players when their strokes match image edges and penalizes them otherwise, thereby encouraging players to make high-quality drawings.
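A reward-and-penalty rule of this kind can be sketched as a simple per-point score: each stroke point near an image edge earns a point, and each stray point loses one. The reward values and tolerance below are illustrative assumptions, not the deployed game's parameters.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def score_stroke(points, edge_mask, tol=4):
    """Toy scoring rule for a sketch game: +1 for each stroke point
    within `tol` pixels of an image edge, -1 otherwise. All numeric
    choices here are assumptions for illustration.
    """
    # Distance from every pixel to the nearest edge pixel.
    dist = distance_transform_edt(~edge_mask)
    total = 0
    for r, c in points:
        total += 1 if dist[r, c] <= tol else -1
    return total
```

Because only rough alignment is rewarded, the rule tolerates the natural looseness of freehand strokes while still discouraging scribbles over empty areas.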