Deep Manga Colorization with Color Style Extraction by Conditional Adversarially Learned Inference

  • Yuusuke Kataoka
  • Takashi Matsubara
  • Kuniaki Uehara
Keywords: Manga

Abstract

Many comic books are now published digitally, and digital books can provide colored content more easily than physical books. This raises the demand for automatic colorization of comic books. Previous studies colorize sketches either with spatial color annotations or with no clues at all; they are expected to reduce the workload of comic artists, but still require spatial color annotations to produce the desired colorization. This study introduces color style information and combines it with conditional adversarially learned inference. Experimental results demonstrate that objects in the manga are painted with colors that depend on the color style information, and that color style information can be extracted from another colored image to paint an object with a desired color.
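The approach builds on adversarially learned inference (ALI), which trains a generator and an encoder jointly against a discriminator that judges image–latent pairs, here conditioned on the input sketch. The following is a minimal PyTorch sketch of such a conditional ALI setup, assuming the generator takes a grayscale sketch and a color-style vector; all module shapes, dimensions (LATENT, STYLE), and the toy data are illustrative assumptions, not the authors' implementation.

```python
# Minimal conditional ALI sketch for manga colorization (illustrative only).
import torch
import torch.nn as nn

LATENT, STYLE = 64, 16  # hypothetical latent and style dimensions

class Generator(nn.Module):
    """Maps (noise z, grayscale sketch, style vector) to an RGB image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1 + LATENT + STYLE, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh())

    def forward(self, z, sketch, style):
        h, w = sketch.shape[-2:]
        # Broadcast z and the style vector over the spatial dimensions.
        zc = z.view(z.size(0), -1, 1, 1).expand(-1, -1, h, w)
        sc = style.view(style.size(0), -1, 1, 1).expand(-1, -1, h, w)
        return self.net(torch.cat([sketch, zc, sc], dim=1))

class Encoder(nn.Module):
    """Inference network: maps a colored image back to a latent code z."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, LATENT))

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Scores joint pairs (image, z), conditioned on the sketch."""
    def __init__(self):
        super().__init__()
        self.img = nn.Sequential(
            nn.Conv2d(3 + 1, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.joint = nn.Sequential(
            nn.Linear(64 + LATENT, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))

    def forward(self, x, sketch, z):
        h = self.img(torch.cat([x, sketch], dim=1))
        return self.joint(torch.cat([h, z], dim=1))

# One toy discriminator step on random stand-in data.
G, E, D = Generator(), Encoder(), Discriminator()
bce = nn.BCEWithLogitsLoss()
x = torch.rand(4, 3, 32, 32)          # colored pages (toy data)
sketch = x.mean(1, keepdim=True)      # stand-in grayscale sketches
style = torch.randn(4, STYLE)         # color-style vectors (assumed given)
z = torch.randn(4, LATENT)

real = D(x, sketch, E(x))                     # encoder pair (x, E(x))
fake = D(G(z, sketch, style), sketch, z)      # generator pair (G(z), z)
d_loss = bce(real, torch.ones_like(real)) + bce(fake, torch.zeros_like(fake))
print(float(d_loss))
```

In full training, the generator and encoder would be updated with the labels flipped, and the style vector would come from a style extractor applied to a reference colored image rather than being sampled at random.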

Published
2017-12-31
Section
Technical Papers (Advanced Applied Informatics)