MarioNette: Self-Supervised Sprite Learning
Abstract
Artists and video game designers often construct 2D animations using libraries of sprites---textured patches of objects and characters. We propose a deep learning approach that decomposes sprite-based video animations into a disentangled representation of recurring graphic elements in a self-supervised manner. By jointly learning a dictionary of possibly transparent patches and training a network that places them onto a canvas, we deconstruct sprite-based content into a sparse, consistent, and explicit representation that can be easily used in downstream tasks, like editing or analysis. Our framework offers a promising approach for discovering recurring visual patterns in image collections without supervision.
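As a rough illustration of the idea summarized above, the sketch below shows the two core ingredients in PyTorch: a learnable dictionary of RGBA (transparent) sprites and an encoder that softly selects and switches sprites on over a coarse grid, trained end to end by reconstructing the input frame. This is a minimal sketch under simplifying assumptions (non-overlapping grid placement, black background, made-up class and parameter names), not the authors' architecture; the actual method's handling of sub-grid positioning, layering, and backgrounds is omitted here.

```python
# Illustrative sketch only: a learnable dictionary of RGBA sprites plus an
# encoder that softly selects one sprite per grid cell and composites them
# onto a canvas, trained by reconstructing the input frame. Names, shapes,
# and the non-overlapping grid placement are assumptions for brevity.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpriteDecomposer(nn.Module):
    def __init__(self, num_sprites=16, sprite_size=16, grid=4):
        super().__init__()
        # Dictionary of RGBA patches; the alpha channel allows transparency.
        self.sprites = nn.Parameter(torch.randn(num_sprites, 4, sprite_size, sprite_size))
        self.grid = grid
        # Encoder predicts, per grid cell, a distribution over dictionary
        # entries plus an "is a sprite present here?" score.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(grid),
            nn.Conv2d(64, num_sprites + 1, 1),
        )

    def forward(self, frame):
        b = frame.shape[0]
        logits = self.encoder(frame)                        # (B, K+1, G, G)
        probs = F.softmax(logits[:, :-1], dim=1)            # soft sprite choice per cell
        presence = torch.sigmoid(logits[:, -1])             # (B, G, G) on/off score
        sprites = torch.sigmoid(self.sprites)               # keep RGBA values in [0, 1]
        # Convex combination of dictionary entries for every grid cell.
        cells = torch.einsum("bkgh,kcij->bghcij", probs, sprites)  # (B, G, G, 4, s, s)
        alpha = cells[:, :, :, 3:4] * presence[..., None, None, None]
        rgb = cells[:, :, :, :3]
        composited = alpha * rgb                            # composite over a black background
        # Tile the non-overlapping cells back into a full canvas.
        s = cells.shape[-1]
        return composited.permute(0, 3, 1, 4, 2, 5).reshape(b, 3, self.grid * s, self.grid * s)


# Self-supervised training signal: reconstruct the frame from the sprite decomposition.
model = SpriteDecomposer()
frames = torch.rand(8, 3, 64, 64)        # stand-in for sprite-based video frames
recon = model(frames)
loss = F.mse_loss(recon, frames)
loss.backward()                          # gradients flow to both sprites and encoder
```

After training, the learned `self.sprites` tensor plays the role of the sprite dictionary and the per-cell selections give an explicit, editable placement of each recurring element.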
Cite
Text
Smirnov et al. "MarioNette: Self-Supervised Sprite Learning." Neural Information Processing Systems, 2021.
Markdown
[Smirnov et al. "MarioNette: Self-Supervised Sprite Learning." Neural Information Processing Systems, 2021.](https://mlanthology.org/neurips/2021/smirnov2021neurips-marionette/)
BibTeX
@inproceedings{smirnov2021neurips-marionette,
  title     = {{MarioNette: Self-Supervised Sprite Learning}},
  author    = {Smirnov, Dmitriy and Gharbi, Michael and Fisher, Matthew and Guizilini, Vitor and Efros, Alexei and Solomon, Justin M},
  booktitle = {Neural Information Processing Systems},
  year      = {2021},
  url       = {https://mlanthology.org/neurips/2021/smirnov2021neurips-marionette/}
}