Sketch Simplification

Edgar Simo-Serra*, Satoshi Iizuka*, Kazuma Sasaki, Hiroshi Ishikawa   (*equal contribution)


We present a novel technique to simplify sketch drawings based on learning a series of convolution operators. In contrast to existing approaches that require vector images as input, we allow the more general and challenging input of rough raster sketches, such as those obtained by scanning pencil drawings. We convert the rough sketch into a simplified version that is then amenable to vectorization. This is all done fully automatically, without user intervention. Our model is a fully convolutional neural network which, unlike most existing convolutional neural networks, can process images of any dimensions and aspect ratio as input, and outputs a simplified sketch with the same dimensions as the input image. To teach our model to simplify, we present a new dataset of pairs of rough and simplified sketch drawings. By leveraging convolution operators in combination with efficient use of our proposed dataset, we are able to train our sketch simplification model. Our approach naturally overcomes the limitations of existing methods, e.g., requiring vector images as input and long computation times, and we show that meaningful simplifications can be obtained for many different test cases. Finally, we validate our results with a user study in which we greatly outperform similar approaches and establish the state of the art in sketch simplification of raster images.

Model

Sketch Simplification Model

Our model is based on a fully convolutional neural network. We feed the model a rough sketch image and obtain a clean, simplified sketch as output. This is done by processing the image with convolutional layers, which can be seen as banks of filters run over the input. While the input is a grayscale image, our model internally uses a much larger representation. We build the model upon three different types of convolutions: down-convolutions, which halve the resolution by using a stride of two; flat-convolutions, which process the image without changing its resolution; and up-convolutions, which double the resolution by using a stride of one half. This allows our model to initially compress the image into a smaller representation, process the small image, and finally expand it into the simplified clean output image that can easily be vectorized.
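As a rough illustration of this encoder-decoder structure, the following minimal PyTorch sketch shows how down-, flat-, and up-convolutions combine so that the output has the same resolution as the input. The layer counts and channel widths here are made up for brevity and are not the exact architecture from the paper.

```python
# Minimal sketch of the encoder-decoder idea described above (illustrative
# layer counts and channel widths; not the exact architecture from the paper).
import torch
import torch.nn as nn

class SimplifySketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.model = nn.Sequential(
            # down-convolutions: stride 2 halves the spatial resolution
            nn.Conv2d(1, 48, kernel_size=5, stride=2, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(48, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(inplace=True),
            # flat convolutions: stride 1 keeps the resolution unchanged
            nn.Conv2d(128, 128, kernel_size=3, stride=1, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 128, kernel_size=3, stride=1, padding=1), nn.ReLU(inplace=True),
            # up-convolutions: transposed convolution with stride 2 doubles the resolution
            nn.ConvTranspose2d(128, 48, kernel_size=4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(48, 1, kernel_size=4, stride=2, padding=1),
            nn.Sigmoid(),  # grayscale output in [0, 1], same size as the input
        )

    def forward(self, x):
        # x: (batch, 1, H, W) grayscale rough sketch; the output keeps H and W
        # as long as they are multiples of the total downsampling factor (4 here).
        return self.model(x)

# Example: a 1 x 1 x 424 x 600 rough sketch maps to a 1 x 1 x 424 x 600 clean sketch.
rough = torch.rand(1, 1, 424, 600)
clean = SimplifySketch()(rough)
```

Because every layer is a convolution, nothing in the network fixes the input size, which is what lets the model handle images of arbitrary dimensions and aspect ratios.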

Results

Sketch Simplification Results

We evaluate extensively on complicated real scanned sketches and show that our approach significantly outperforms the state of the art. We corroborate these results with a user study in which our model significantly outperforms vectorization approaches. Images (a), (b), and (d) are part of our test set, while images (c) and (e) were taken from Flickr. Image (c) courtesy of Anna Anjos and image (e) courtesy of Yama Q, used under Creative Commons licensing.

Comparison

Comparison with commercial tools

We perform a user study comparing against vectorization tools that work directly on raster images. In particular, we consider the open-source Potrace and the commercial Adobe Live Trace. Users prefer our approach over 97% of the time compared to either of the two tools.

For more details and results, please consult the full paper.

This research was partially funded by JST CREST.

Publications

2016

Learning to Simplify: Fully Convolutional Networks for Rough Sketch Cleanup
Edgar Simo-Serra*, Satoshi Iizuka*, Kazuma Sasaki, Hiroshi Ishikawa (* equal contribution)
ACM Transactions on Graphics (SIGGRAPH), 2016
@Article{SimoSerraSIGGRAPH2016,
   author  = {Edgar Simo-Serra and Satoshi Iizuka and Kazuma Sasaki and Hiroshi Ishikawa},
   title   = {{Learning to Simplify: Fully Convolutional Networks for Rough Sketch Cleanup}},
   journal = {ACM Transactions on Graphics (SIGGRAPH)},
   year    = {2016},
   volume  = {35},
   number  = {4},
}

Source Code

Sketch Simplification Network, 1.0 (Dec, 2017)
Sketch Simplification Convolutional Neural Network
This code implements the "Learning to Simplify: Fully Convolutional Networks for Rough Sketch Cleanup" and "Mastering Sketching: Adversarial Augmentation for Structured Prediction" papers, and contains pre-trained models and example usage code.
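As a hedged illustration of what using such pre-trained weights might look like (the checkpoint filename, image filenames, and the SimplifySketch class from the Model section sketch above are hypothetical placeholders, not the actual files shipped with the release), inference reduces to loading a grayscale image, normalizing it, and running it through the network:

```python
# Hypothetical inference sketch; "model_gan.pth" and SimplifySketch are placeholders
# (SimplifySketch is the illustrative network defined in the Model section above),
# not the actual files or classes distributed with the release.
import torch
from PIL import Image
from torchvision import transforms

model = SimplifySketch()
model.load_state_dict(torch.load("model_gan.pth", map_location="cpu"))
model.eval()

to_tensor = transforms.Compose([transforms.Grayscale(), transforms.ToTensor()])
rough = to_tensor(Image.open("rough_sketch.png")).unsqueeze(0)  # (1, 1, H, W)

# If needed, pad H and W up to a multiple of the network's downsampling factor,
# then run the forward pass without tracking gradients.
with torch.no_grad():
    clean = model(rough)

transforms.ToPILImage()(clean.squeeze(0).clamp(0, 1)).save("clean_sketch.png")
```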