LayerAnimate: Layer-Level Control for Animation

Abstract

Traditional animation production decomposes visual elements into discrete layers to enable independent processing for sketching, refining, coloring, and in-betweening. Existing anime video generation methods typically treat animation as a data domain distinct from real-world videos and lack fine-grained control at the layer level. To bridge this gap, we introduce LayerAnimate, a novel video diffusion framework with a layer-aware architecture that enables manipulation of individual layers through layer-level controls. Developing such a layer-aware framework faces a significant data scarcity challenge due to the commercial sensitivity of professional animation assets. To address this limitation, we propose a data curation pipeline featuring Automated Element Segmentation and Motion-based Hierarchical Merging. Through quantitative and qualitative comparisons and a user study, we demonstrate that LayerAnimate outperforms current methods in animation quality, control precision, and usability, making it an effective tool for both professional animators and amateur enthusiasts. This framework opens up new possibilities for layer-level animation applications and creative flexibility. Our code is available at https://layeranimate.github.io.
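
The abstract does not spell out how Motion-based Hierarchical Merging works, but the name suggests grouping automatically segmented elements into layers by motion similarity. Below is a minimal sketch of one plausible realization, assuming each element is summarized by its mean optical-flow vector; the function name, the average-linkage clustering choice, and the distance threshold are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def merge_elements_by_motion(element_flows, distance_threshold=0.5):
    """Group segmented elements into motion layers (hypothetical sketch).

    element_flows: (N, 2) array of per-element mean optical-flow vectors.
    Returns a list of layers, each a list of element indices.
    """
    flows = np.asarray(element_flows, dtype=float)
    if len(flows) < 2:
        return [list(range(len(flows)))]
    # Agglomerative (hierarchical) clustering on flow-vector distance:
    # elements whose motion differs by less than the threshold are merged.
    tree = linkage(flows, method="average")
    labels = fcluster(tree, t=distance_threshold, criterion="distance")
    layers = {}
    for idx, label in enumerate(labels):
        layers.setdefault(label, []).append(idx)
    return list(layers.values())

# Example: two elements moving together and a near-static background.
flows = [[2.0, 0.0], [2.1, 0.1], [0.0, -0.1]]
print(merge_elements_by_motion(flows))  # -> [[0, 1], [2]]
```

Under these assumptions, elements that drift together (e.g., a character and a held prop) collapse into one layer, while independently moving content such as the background forms its own.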

Cite

Text

Yang et al. "LayerAnimate: Layer-Level Control for Animation." International Conference on Computer Vision, 2025.

Markdown

[Yang et al. "LayerAnimate: Layer-Level Control for Animation." International Conference on Computer Vision, 2025.](https://mlanthology.org/iccv/2025/yang2025iccv-layeranimate/)

BibTeX

@inproceedings{yang2025iccv-layeranimate,
  title     = {{LayerAnimate: Layer-Level Control for Animation}},
  author    = {Yang, Yuxue and Fan, Lue and Lin, Zuzeng and Wang, Feng and Zhang, Zhaoxiang},
  booktitle = {International Conference on Computer Vision},
  year      = {2025},
  pages     = {10865--10874},
  url       = {https://mlanthology.org/iccv/2025/yang2025iccv-layeranimate/}
}