DDLP: Unsupervised Object-Centric Video Prediction with Deep Dynamic Latent Particles

Abstract

We propose a new object-centric video prediction algorithm based on the deep latent particle (DLP) representation of Daniel and Tamar (2022). In comparison to existing slot- or patch-based representations, DLPs model the scene using a set of keypoints with learned parameters for properties such as position and size, and are both efficient and interpretable. Our method, *deep dynamic latent particles* (DDLP), yields state-of-the-art object-centric video prediction results on several challenging datasets. The interpretable nature of DDLP allows us to perform "what-if" generation: predicting the consequences of changing properties of objects in the initial frames. In addition, DLP's compact structure enables efficient diffusion-based unconditional video generation. Videos, code, and pre-trained models are available at https://taldatech.github.io/ddlp-web
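The abstract describes a scene as a set of particles (keypoints) with interpretable attributes. As a rough illustration only, the following minimal PyTorch sketch shows what such a particle set might look like and why "what-if" edits are direct tensor operations; the attribute names, shapes, and the `depth`/`opacity` fields here are assumptions for exposition, not the paper's actual schema.

```python
import torch

# Hypothetical latent-particle set for a single frame (illustrative only,
# not the authors' implementation). Each of K particles carries
# interpretable attributes plus a learned appearance embedding.
K, D = 10, 4  # number of particles, appearance latent size (assumed)

particles = {
    "position": torch.rand(K, 2),     # (x, y) keypoint location in [0, 1]
    "scale": torch.rand(K, 2),        # per-particle bounding-box size
    "depth": torch.rand(K, 1),        # z-ordering for occlusions (assumed)
    "opacity": torch.rand(K, 1),      # presence / transparency (assumed)
    "appearance": torch.randn(K, D),  # learned visual features
}

# "What-if" generation: because the attributes are interpretable, moving
# an object before rolling out the dynamics model is a simple edit.
particles["position"][0] += torch.tensor([0.1, 0.0])  # shift particle 0
```

In contrast to slot- or patch-based representations, where object properties are entangled in an opaque latent vector, such an explicit per-attribute layout is what makes edits like the one above possible.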

Cite

Text

Daniel and Tamar. "DDLP: Unsupervised Object-Centric Video Prediction with Deep Dynamic Latent Particles." Transactions on Machine Learning Research, 2024.

Markdown

[Daniel and Tamar. "DDLP: Unsupervised Object-Centric Video Prediction with Deep Dynamic Latent Particles." Transactions on Machine Learning Research, 2024.](https://mlanthology.org/tmlr/2024/daniel2024tmlr-ddlp/)

BibTeX

@article{daniel2024tmlr-ddlp,
  title     = {{DDLP: Unsupervised Object-Centric Video Prediction with Deep Dynamic Latent Particles}},
  author    = {Daniel, Tal and Tamar, Aviv},
  journal   = {Transactions on Machine Learning Research},
  year      = {2024},
  url       = {https://mlanthology.org/tmlr/2024/daniel2024tmlr-ddlp/}
}