The Animation Transformer: Visual Correspondence via Segment Matching
Abstract
Visual correspondence is a fundamental building block for assistive tools in hand-drawn animation. However, while a large body of work has focused on learning visual correspondences at the pixel level, few approaches have emerged to learn correspondence at the level of line enclosures (segments) that naturally occur in hand-drawn animation. Exploiting this structure has numerous benefits: it avoids the memory cost of pixel-level attention over high-resolution images and enables the use of real-world animation datasets that contain correspondence information at the level of per-segment colors. To that end, we propose the Animation Transformer (AnT), which uses a Transformer-based architecture to learn the spatial and visual relationships between segments across a sequence of images. By leveraging a forward match loss and a cycle consistency loss, our approach attains excellent results compared to state-of-the-art pixel-based approaches on challenging datasets from real animation productions that lack ground-truth correspondence labels.
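The forward-match and cycle-consistency objectives mentioned in the abstract can be sketched in a toy form. This is a minimal, hypothetical NumPy stand-in (the function name, feature matrices, and similarity choice are illustrative assumptions, not the paper's implementation): segments matched from frame A to frame B and back should land on themselves.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cycle_consistency_loss(feats_a, feats_b):
    """Toy forward-match + cycle-consistency objective.

    feats_a: (n, d) segment embeddings for the reference frame
    feats_b: (m, d) segment embeddings for the target frame
    (hypothetical stand-ins for learned per-segment features)
    """
    sim = feats_a @ feats_b.T          # (n, m) similarity logits
    p_ab = softmax(sim, axis=1)        # soft match A -> B
    p_ba = softmax(sim.T, axis=1)      # soft match B -> A
    p_cycle = p_ab @ p_ba              # (n, n): round trip A -> B -> A
    # each segment should return to itself, so maximize the diagonal
    return -np.log(np.diag(p_cycle) + 1e-9).mean()
```

In this sketch the loss goes to zero when the soft matches are confident and mutually consistent, and grows when segments drift to different partners on the return trip.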
Cite
Text
Casey et al. "The Animation Transformer: Visual Correspondence via Segment Matching." International Conference on Computer Vision, 2021. doi:10.1109/ICCV48922.2021.01113
Markdown
[Casey et al. "The Animation Transformer: Visual Correspondence via Segment Matching." International Conference on Computer Vision, 2021.](https://mlanthology.org/iccv/2021/casey2021iccv-animation/) doi:10.1109/ICCV48922.2021.01113
BibTeX
@inproceedings{casey2021iccv-animation,
title = {{The Animation Transformer: Visual Correspondence via Segment Matching}},
author = {Casey, Evan and Pérez, Víctor and Li, Zhuoru},
booktitle = {International Conference on Computer Vision},
year = {2021},
pages = {11323-11332},
doi = {10.1109/ICCV48922.2021.01113},
url = {https://mlanthology.org/iccv/2021/casey2021iccv-animation/}
}