User-Controllable Multi-Texture Synthesis with Generative Adversarial Networks

Aibek Alanov1,2,3,∗, Max Kochurov1,2,∗, Denis Volkhonskiy2, Daniil Yashkov5, Evgeny Burnaev2, Dmitry Vetrov1,4

1 Samsung AI Center Moscow; 2 Skolkovo Institute of Science and Technology; 3 National Research University Higher School of Economics; 4 Samsung-HSE Laboratory, National Research University Higher School of Economics; 5 Federal Research Center "Computer Science and Control" of the Russian Academy of Sciences

arXiv 2019

One can 1) take a New Guinea 3264 × 4928 landscape photo, 2) learn a manifold of 2D texture embeddings for this photo, 3) visualize a texture map for the image, and 4) perform texture detection for a patch using distances between the learned embeddings.
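The detection step in this teaser reduces to a nearest-neighbor search on the texture manifold: encode the query patch, then compare its embedding against the per-region embeddings of the photo. A minimal sketch follows; the arrays `region_embeddings` and `query` are placeholders standing in for outputs of the encoder sketched after the abstract, not values produced by the paper's released code.

```python
# Hypothetical sketch of texture detection via embedding distances.
# In practice, `region_embeddings` would be encoder outputs for patches
# tiling the photo, and `query` the encoding of the patch to detect.
import numpy as np

region_embeddings = np.random.randn(100, 2)  # 100 regions on a 2D texture manifold
query = np.random.randn(2)                   # embedding of the query patch

distances = np.linalg.norm(region_embeddings - query, axis=1)
best_region = int(np.argmin(distances))      # region whose texture matches the patch
```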

Abstract

We propose a novel multi-texture synthesis model based on generative adversarial networks (GANs) with a user-controllable mechanism. This user control allows one to explicitly specify which texture the model should generate. The property follows from an encoder that learns a latent representation for each texture in the dataset. To ensure dataset coverage, we use an adversarial loss function that penalizes incorrect reproduction of a given texture. In experiments, we show that our model can learn descriptive texture manifolds for large datasets and from raw data such as collections of high-resolution photos. Moreover, we apply our method to produce 3D textures and show that it outperforms existing baselines.
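The abstract names three interacting parts: an encoder that maps each texture to a point on a low-dimensional manifold, a generator conditioned on that embedding, and a discriminator whose adversarial loss penalizes incorrect reproduction of the conditioning texture. The following is a minimal PyTorch sketch of that setup under stated assumptions: the module names (`TextureEncoder`, `PatchGenerator`, `Discriminator`), layer sizes, 64 × 64 patch size, and the 2D embedding dimension are illustrative choices, not the paper's exact architecture or hyperparameters.

```python
# Illustrative sketch of an encoder-conditioned texture GAN (assumed
# architecture; sizes and names are not taken from the paper).
import torch
import torch.nn as nn

EMBED_DIM, PATCH = 2, 64  # the paper visualizes a 2D texture embedding manifold

class TextureEncoder(nn.Module):
    """Maps a texture patch to a point on the latent texture manifold."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, EMBED_DIM),
        )
    def forward(self, x):
        return self.net(x)

class PatchGenerator(nn.Module):
    """Generates a texture patch from noise conditioned on a texture embedding."""
    def __init__(self, noise_dim=64):
        super().__init__()
        self.noise_dim = noise_dim
        self.net = nn.Sequential(
            nn.ConvTranspose2d(noise_dim + EMBED_DIM, 128, 4, 1, 0), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, 4, 0), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, 4, 0), nn.Tanh(),
        )
    def forward(self, z, e):
        # Concatenate noise with the texture embedding, reshape to a 1x1 map.
        h = torch.cat([z, e], dim=1)[:, :, None, None]
        return self.net(h)  # (B, 3, 64, 64)

class Discriminator(nn.Module):
    """Scores a (patch, embedding) pair, so generated patches that do not
    reproduce the conditioning texture are penalized adversarially."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(128 + EMBED_DIM, 1)
    def forward(self, x, e):
        return self.head(torch.cat([self.conv(x), e], dim=1))

# One conditioning pass: real patches are encoded, and the generator must
# reproduce the same texture from noise plus that embedding.
E, G, D = TextureEncoder(), PatchGenerator(), Discriminator()
x = torch.rand(8, 3, PATCH, PATCH) * 2 - 1   # batch of real texture patches in [-1, 1]
e = E(x)                                     # latent texture embeddings
z = torch.randn(8, G.noise_dim)
fake = G(z, e)                               # patches that should match x's textures
real_score, fake_score = D(x, e), D(fake, e) # inputs to the adversarial loss
```

Conditioning the discriminator on the same embedding as the generator is what lets the adversarial loss punish a sample for being the wrong texture, not merely for being unrealistic; this is a sketch of that idea rather than the paper's exact loss.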
