Latent Video Transformer

Ruslan Rakhimov¹, Denis Volkhonskiy¹, Alexey Artemov¹, Denis Zorin²,¹, Evgeny Burnaev¹

¹Skolkovo Institute of Science and Technology, ²New York University

arXiv 2020

Generated Kinetics-600 videos with 5 priming frames.

Abstract

The video generation task can be formulated as predicting future video frames given some past frames. Recent generative models for video suffer from high computational requirements; some require up to 512 Tensor Processing Units for parallel training. In this work, we address this problem by modeling the dynamics in a latent space. After transforming frames into the latent space, our model predicts the latent representations of the next frames in an autoregressive manner. We demonstrate the performance of our approach on the BAIR Robot Pushing and Kinetics-600 datasets. Our approach reduces the training requirements to 8 Graphics Processing Units while maintaining comparable generation quality.
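The abstract describes a two-stage pipeline: a discrete latent encoder (the paper uses a VQ-VAE) compresses each frame into a grid of codes, and an autoregressive transformer predicts the codes of future frames, which are then decoded back to pixels. Below is a minimal sketch of that idea in PyTorch; the module names, codebook size, and tokens-per-frame figure are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of the latent-space pipeline, assuming PyTorch.
# Frames are assumed to be pre-encoded into discrete latent codes by a
# VQ-VAE; this model is only the autoregressive prior over those codes.
import torch
import torch.nn as nn

VOCAB = 512            # size of the discrete latent codebook (assumed)
TOKENS_PER_FRAME = 64  # e.g. an 8x8 latent grid per frame (assumed)


class LatentPrior(nn.Module):
    """Autoregressive transformer over flattened sequences of latent codes."""

    def __init__(self, d_model=256, n_layers=4, n_heads=8, max_len=1024):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, d_model)
        self.pos = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, VOCAB)

    def forward(self, tokens):
        # tokens: (batch, seq) integer codes produced by the frame encoder
        seq = tokens.shape[1]
        x = self.embed(tokens) + self.pos.weight[:seq]
        # Causal mask makes the model autoregressive: each position can
        # only attend to earlier latent codes.
        mask = torch.triu(torch.full((seq, seq), float("-inf")), diagonal=1)
        return self.head(self.blocks(x, mask=mask))


@torch.no_grad()
def generate(model, prime_tokens, n_future_frames):
    """Sample latent codes for future frames given priming-frame codes."""
    tokens = prime_tokens
    for _ in range(n_future_frames * TOKENS_PER_FRAME):
        logits = model(tokens)[:, -1]                   # next-code logits
        nxt = torch.multinomial(logits.softmax(-1), 1)  # sample one code
        tokens = torch.cat([tokens, nxt], dim=1)
    return tokens  # pass through the VQ-VAE decoder to obtain pixel frames


# Usage: codes for 5 priming frames -> codes for one predicted frame.
model = LatentPrior()
prime = torch.randint(0, VOCAB, (1, 5 * TOKENS_PER_FRAME))
future = generate(model, prime, n_future_frames=1)
```

The compute savings claimed in the abstract come from this factorization: the transformer operates over short sequences of latent codes rather than raw pixels, so its attention cost shrinks accordingly.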

Materials

Paper

Code

Contact

If you have any questions about this work, please contact us at adase-3ddl@skoltech.ru.