What Do Latent Action Models Actually Learn?

Abstract

Latent action models (LAMs) aim to learn action-relevant changes from unlabeled videos by compressing the changes between frames into latents. However, differences between video frames can be caused by *controllable changes* as well as exogenous noise, raising an important concern -- do the latents capture the changes caused by actions, or irrelevant noise? This paper studies this issue analytically, presenting a linear model that encapsulates the essence of LAM learning while remaining tractable. This model yields several insights, including connections between LAMs and principal component analysis (PCA), desiderata for the data-generating policy, and justification for strategies that encourage learning controllable changes via data augmentation, data cleaning, and an auxiliary action-prediction objective. We also provide illustrative results from numerical simulation, shedding light on how the specific structure of observations, actions, and noise in the data influences LAM learning.
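To make the LAM-PCA connection concrete, here is a minimal sketch in NumPy. The setup is entirely hypothetical (the mixing matrix `A`, dimensions, and noise scale are illustrative choices, not taken from the paper): frame differences are generated as a sum of action-driven changes and small exogenous noise, and a linear LAM trained with a reconstruction objective reduces to PCA on those differences, so its latent subspace can be read off from an SVD.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic setup: observed frame differences mix
# controllable (action-driven) changes with exogenous noise.
n, d, k = 500, 10, 2            # samples, observation dim, latent dim
A = rng.normal(size=(d, k))     # mixing matrix: actions -> frame changes
actions = rng.normal(size=(n, k))
noise = 0.1 * rng.normal(size=(n, d))
deltas = actions @ A.T + noise  # observed frame differences

# A linear LAM with an MSE reconstruction objective recovers the
# top-k principal subspace of the frame differences, i.e. PCA.
deltas_c = deltas - deltas.mean(axis=0)
_, _, Vt = np.linalg.svd(deltas_c, full_matrices=False)
pca_basis = Vt[:k].T            # top-k principal directions (d x k)

# Alignment between the learned subspace and the controllable
# subspace spanned by A: the singular values of Qa.T @ pca_basis
# are cosines of the principal angles (all near 1 when noise is small).
Qa, _ = np.linalg.qr(A)
alignment = np.linalg.svd(Qa.T @ pca_basis, compute_uv=False).min()
print(f"subspace alignment: {alignment:.3f}")
```

When the noise scale is increased relative to the action-driven signal, the alignment degrades, which is the failure mode the abstract highlights: the latents begin to encode exogenous variation rather than controllable changes.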

Cite

Text

Zhang et al. "What Do Latent Action Models Actually Learn?" Advances in Neural Information Processing Systems, 2025.

Markdown

[Zhang et al. "What Do Latent Action Models Actually Learn?" Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/zhang2025neurips-latent-a/)

BibTeX

@inproceedings{zhang2025neurips-latent-a,
  title     = {{What Do Latent Action Models Actually Learn?}},
  author    = {Zhang, Chuheng and Pearce, Tim and Zhang, Pushi and Wang, Kaixin and Chen, Xiaoyu and Shen, Wei and Zhao, Li and Bian, Jiang},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/zhang2025neurips-latent-a/}
}