Smart Inker

Edgar Simo-Serra, Satoshi Iizuka, Hiroshi Ishikawa

[Teaser figure: Smart Inker]
© Krenz Cushart, Krenz’s Artwork Sketch Collection 2004-2013 (www.krenzartwork.com)

We present an interactive approach for inking, which is the process of turning a pencil rough sketch into a clean line drawing. The approach, which we call the Smart Inker, consists of several “smart” tools that intuitively react to user input, while guided by the input rough sketch, to efficiently and naturally connect lines, erase shading, and fine-tune the line drawing output. Our approach is data-driven: the tools are based on fully convolutional networks, which we train to exploit both the user edits and inaccurate rough sketch to produce accurate line drawings, allowing high-performance interactive editing in real-time on a variety of challenging rough sketch images. For the training of the tools, we developed two key techniques: one is the creation of training data by simulation of vague and quick user edits; the other is a line normalization based on learning from vector data. These techniques, in combination with our sketch-specific data augmentation, allow us to train the tools on heterogeneous data without actual user interaction. We validate our approach with an in-depth user study, comparing it with professional illustration software, and show that our approach is able to reduce inking time by a factor of 1.8x while improving the results of amateur users.

Approach

1. The inker's main purpose is to translate the penciller's graphite pencil lines into reproducible, black, ink lines.
2. The inker must honor the penciller's original intent while adjusting any obvious mistakes.
3. The inker determines the look of the finished art.

—Gary Martin, The Art of Comic Book Inking [1997]

Our approach is based on using fully convolutional networks to interactively ink rough sketch drawings in real time. The interactivity is built around three tools:

  • Inker Pen: Allows fine-grained control of output lines.
  • Inker Brush: Can be used to quickly connect and complete strokes.
  • Smart Eraser: Erases unnecessary lines while taking into account the context.
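To make the tool-network interaction concrete, here is a minimal sketch in Python/NumPy of one plausible input encoding: the rough sketch and each tool's strokes are stacked as separate image channels of a single tensor that is fed to the fully convolutional network. The four-channel layout and the names used here are illustrative assumptions, not necessarily the paper's exact encoding.

```python
import numpy as np

H, W = 512, 512  # canvas size (arbitrary for this sketch)

# One grayscale channel for the rough sketch, plus one stroke mask per tool.
rough  = np.zeros((H, W), dtype=np.float32)  # scanned pencil sketch in [0, 1]
pen    = np.zeros((H, W), dtype=np.float32)  # precise strokes the output should follow
brush  = np.zeros((H, W), dtype=np.float32)  # regions whose lines should be completed
eraser = np.zeros((H, W), dtype=np.float32)  # regions whose lines should be removed

# Stack into a single (C, H, W) input; the network sees the user edits
# alongside the rough sketch and can react to both at once.
x = np.stack([rough, pen, brush, eraser], axis=0)
print(x.shape)  # (4, 512, 512)
```

Encoding edits as image channels is what lets a single convolutional model implement all three tools: each tool differs only in which channel its strokes are written to and in how the corresponding training data was simulated.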

We use a data-driven approach to design the tools, in which we leverage a large dataset of rough sketch and line drawing pairs. The tools are created by training with simulated user edits that modify the input data, as illustrated in the sketch below.
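As a concrete illustration of the simulation idea, the snippet below fabricates a training example from a clean line drawing: it deletes a patch of lines (which a simulated brush stroke then asks the model to complete) and splatters noise (which a simulated eraser stroke asks the model to remove). The straight-box edits and all parameters are our own simplifications for illustration, not the paper's actual simulation procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_edits(line_drawing):
    """Given a clean (H, W) line drawing in [0, 1] with lines near 1, return
    (corrupted_input, brush_mask, eraser_mask, target). A hypothetical
    simplification of the edit simulation described above."""
    H, W = line_drawing.shape
    corrupted = line_drawing.copy()
    brush_mask = np.zeros_like(corrupted)
    eraser_mask = np.zeros_like(corrupted)

    # 1) Delete a random patch of lines; a brush stroke marks it for completion.
    y, x = rng.integers(0, H - 32), rng.integers(0, W - 32)
    corrupted[y:y + 32, x:x + 32] = 0.0   # lines the model must re-complete
    brush_mask[y:y + 32, x:x + 32] = 1.0  # vague, quick simulated user edit

    # 2) Add noise (fake shading); an eraser stroke marks it for removal.
    y, x = rng.integers(0, H - 32), rng.integers(0, W - 32)
    noise = (rng.random((32, 32)) < 0.1).astype(np.float32)
    corrupted[y:y + 32, x:x + 32] = np.maximum(corrupted[y:y + 32, x:x + 32], noise)
    eraser_mask[y:y + 32, x:x + 32] = 1.0

    # The target is the untouched clean drawing in both cases.
    return corrupted, brush_mask, eraser_mask, line_drawing
```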

Smart Tools

Next, we describe each tool in more detail and show several examples of each.

Inker Pen

The inker pen is a tool designed for accurate edits. However, unlike pen tools found in standard software, this tool still reacts to the original rough sketch and output. This allows the tool to naturally connect and complete line segments, even when the user edit is not perfect.

[Figure: Inker Pen examples]

Inker Brush

The inker brush is a tool designed for fast and easy editing: the user points out to the model which line segments should be completed. In contrast with the inker pen, it does not allow edits that are as precise, leaving the details of the inking to the model. In general, this tool is preferred over the inker pen, which is mainly used when accurate and detailed inking is necessary.

[Figure: Inker Brush examples]

Smart Eraser

The smart eraser acts similarly to a normal eraser; however, it also reacts to the rough sketch and output lines. This allows it to preserve important lines while erasing the unnecessary parts pointed out by the user.
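At editing time, every tool reduces to the same operation: write the stroke into its channel and re-run the network over the canvas. The fully convolutional design is what makes this fast enough for real-time feedback. Below is a hypothetical glue-code sketch; the event handling and function names are ours, not from the paper.

```python
import torch

@torch.no_grad()
def on_stroke(model, rough, pen, brush, eraser):
    """Re-ink the canvas after a user stroke. All inputs are (H, W) tensors
    in [0, 1]; `model` is a trained fully convolutional network."""
    x = torch.stack([rough, pen, brush, eraser]).unsqueeze(0)  # (1, 4, H, W)
    ink = model(x)        # (1, 1, H, W) clean line drawing
    return ink.squeeze()  # display this as the live output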

[Figure: Smart Eraser examples]

Model

[Figure: Smart Inker training approach]

Our model is a fully convolutional network with an encoder-decoder architecture. For training the model to ink rough sketches, we introduced two important changes into the training procedure. First, we normalize the training line drawings by using auxiliary neural networks trained on vector line drawing data. This allows producing clean outputs without post-processing or complicated loss functions, and is essential for performance. Second, we simulate user edits in a realistic manner, which allows creating the different tools used by our model. This simulation is not limited to the user edits themselves: it also manipulates the input training data, adding noise to be erased or removing lines to be completed, in accordance with the simulated user edits.
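For concreteness, a toy encoder-decoder fully convolutional network of the general kind described here might look as follows in PyTorch. The depth, channel widths, and transposed-convolution upsampling are illustrative guesses, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class InkerNet(nn.Module):
    """Toy encoder-decoder FCN: downsample, then upsample back to full
    resolution. Input: rough sketch + 3 edit channels; output: 1 ink channel."""
    def __init__(self, in_ch=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = InkerNet()
out = model(torch.rand(1, 4, 512, 512))
print(out.shape)  # torch.Size([1, 1, 512, 512])
```

The downsample-then-upsample structure enlarges the receptive field, which is what lets the model connect strokes across gaps and distinguish shading from intended lines while still producing a full-resolution output.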

Results

[Figure: Inking results on scanned rough sketches]

We evaluate our approach on challenging real scanned rough sketches as shown above. The image in the second row is copyrighted by David Revoy (www.davidrevoy.com) under CC-by 4.0, while the third row rough sketch is copyrighted by Eisaku Kubonouchi (@EISAKUSAKU) and only non-commercial research usage is allowed.

For more details and results, please consult the full paper.

This work was partially supported by JST CREST Grant Number JPMJCR14D1, and JST ACT-I Grant Numbers JPMJPR16UD and JPMJPR16U3.

Publications

2024

Deep Sketch Vectorization via Implicit Surface Extraction
Chuan Yan, Yong Li, Deepali Aneja, Matthew Fisher, Edgar Simo-Serra, Yotam Gingold
ACM Transactions on Graphics (SIGGRAPH), 2024
We introduce an algorithm for sketch vectorization with state-of-the-art accuracy that is capable of handling complex sketches. We approach sketch vectorization as a surface extraction task from an unsigned distance field, which is implemented using a two-stage neural network and a dual contouring domain post-processing algorithm. The first stage consists of extracting unsigned distance fields from an input raster image. The second stage consists of an improved neural dual contouring network that is more robust to noisy input and more sensitive to line geometry. To address the issue of undersampling inherent in grid-based surface extraction approaches, we explicitly predict undersampling and keypoint maps. These are used in our post-processing algorithm to resolve sharp features and multi-way junctions. The keypoint and undersampling maps are naturally controllable, which we demonstrate in an interactive topology refinement interface. Our proposed approach produces far more accurate vectorizations on complex input than previous approaches, with efficient running time.
@Article{ChuanSIGGRAPH2024,
   author    = {Chuan Yan and Yong Li and Deepali Aneja and Matthew Fisher and Edgar Simo-Serra and Yotam Gingold},
   title     = {{Deep Sketch Vectorization via Implicit Surface Extraction}},
   journal   = {ACM Transactions on Graphics (SIGGRAPH)},
   year      = 2024,
   volume    = 43,
   number    = 4,
}

2018

Real-Time Data-Driven Interactive Rough Sketch Inking
Edgar Simo-Serra, Satoshi Iizuka, Hiroshi Ishikawa
ACM Transactions on Graphics (SIGGRAPH), 2018
We present an interactive approach for inking, which is the process of turning a pencil rough sketch into a clean line drawing. The approach, which we call the Smart Inker, consists of several "smart" tools that intuitively react to user input, while guided by the input rough sketch, to efficiently and naturally connect lines, erase shading, and fine-tune the line drawing output. Our approach is data-driven: the tools are based on fully convolutional networks, which we train to exploit both the user edits and inaccurate rough sketch to produce accurate line drawings, allowing high-performance interactive editing in real-time on a variety of challenging rough sketch images. For the training of the tools, we developed two key techniques: one is the creation of training data by simulation of vague and quick user edits; the other is a line normalization based on learning from vector data. These techniques, in combination with our sketch-specific data augmentation, allow us to train the tools on heterogeneous data without actual user interaction. We validate our approach with an in-depth user study, comparing it with professional illustration software, and show that our approach is able to reduce inking time by a factor of 1.8x while improving the results of amateur users.
@Article{SimoSerraSIGGRAPH2018,
   author    = {Edgar Simo-Serra and Satoshi Iizuka and Hiroshi Ishikawa},
   title     = {{Real-Time Data-Driven Interactive Rough Sketch Inking}},
   journal   = {ACM Transactions on Graphics (SIGGRAPH)},
   year      = 2018,
   volume    = 37,
   number    = 4,
}