Agent-to-Sim: Learning Interactive Behavior Models from Casual Longitudinal Videos
Abstract
We present Agent-to-Sim (ATS), a framework for learning interactive behavior models of 3D agents from casual longitudinal video collections. Unlike prior works that rely on marker-based tracking and multi-view cameras, ATS learns natural behaviors of animal agents non-invasively from video observations recorded over a long time span (e.g., a month) in a single environment. Modeling the 3D behavior of an agent requires persistent 3D tracking (i.e., knowing which point corresponds to which) over a long time period. To obtain such data, we develop a coarse-to-fine registration method that tracks the agent and the camera over time through a canonical 3D space, resulting in a complete and persistent spacetime 4D representation. We then train a generative model of agent behaviors using paired perception-and-motion data of the agent queried from the 4D reconstruction. ATS enables real-to-sim transfer from video recordings of an agent to an interactive behavior simulator. We demonstrate results on animals given monocular RGBD videos captured by a smartphone. Project page: gengshan-y.github.io/agent2sim-www.
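The core training step described above, learning a model that maps an agent's perception to its motion from paired data queried out of a 4D reconstruction, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the names (BehaviorModel, query_4d_pairs), the feature dimensions, and the use of a simple regression loss in place of the paper's generative objective are all assumptions made for brevity.

# Minimal sketch (hypothetical, not the authors' code) of the ATS idea:
# train a conditional model that maps an agent's perception features to
# future motion, using (perception, motion) pairs that would be queried
# from the persistent 4D reconstruction.
import torch
import torch.nn as nn

PERCEPTION_DIM = 64   # assumed size of perception features (agent/observer/scene state)
MOTION_DIM = 32       # assumed size of flattened future-motion targets

class BehaviorModel(nn.Module):
    """Stand-in conditional model: perception features -> future motion."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(PERCEPTION_DIM, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, MOTION_DIM),
        )

    def forward(self, perception):
        return self.net(perception)

def query_4d_pairs(num_pairs):
    """Placeholder for querying paired (perception, motion) data from the
    4D reconstruction; here we fabricate random tensors for illustration."""
    return torch.randn(num_pairs, PERCEPTION_DIM), torch.randn(num_pairs, MOTION_DIM)

model = BehaviorModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(100):
    perception, motion = query_4d_pairs(num_pairs=16)
    loss = nn.functional.mse_loss(model(perception), motion)  # regression stand-in
    opt.zero_grad()
    loss.backward()
    opt.step()

In the paper the behavior model is generative, so the MSE regression here would be replaced by a generative objective (e.g., a diffusion-style denoising loss over motion), but the data flow from 4D reconstruction to paired supervision is the same.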
Cite
Text
Yang et al. "Agent-to-Sim: Learning Interactive Behavior Models from Casual Longitudinal Videos." International Conference on Learning Representations, 2025.
Markdown
[Yang et al. "Agent-to-Sim: Learning Interactive Behavior Models from Casual Longitudinal Videos." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/yang2025iclr-agenttosim/)
BibTeX
@inproceedings{yang2025iclr-agenttosim,
title = {{Agent-to-Sim: Learning Interactive Behavior Models from Casual Longitudinal Videos}},
author = {Yang, Gengshan and Bajcsy, Andrea and Saito, Shunsuke and Kanazawa, Angjoo},
booktitle = {International Conference on Learning Representations},
year = {2025},
url = {https://mlanthology.org/iclr/2025/yang2025iclr-agenttosim/}
}