RedMotion: Motion Prediction via Redundancy Reduction

Abstract

We introduce RedMotion, a transformer model for motion prediction in self-driving vehicles that learns environment representations via redundancy reduction. Our first type of redundancy reduction is induced by an internal transformer decoder and reduces a variable-sized set of local road environment tokens, representing road graphs and agent data, to a fixed-sized global embedding. The second type of redundancy reduction is obtained by self-supervised learning and applies the redundancy reduction principle to embeddings generated from augmented views of road environments. Our experiments reveal that our representation learning approach outperforms PreTraM, Traj-MAE, and GraphDINO in a semi-supervised setting. Moreover, RedMotion achieves competitive results compared to HPTR or MTR++ in the Waymo Motion Prediction Challenge. Our open-source implementation is available at: https://github.com/kit-mrt/future-motion
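The second redundancy reduction described above follows the self-supervised redundancy reduction principle (as in Barlow Twins): embeddings of two augmented views of the same road environment are pushed to be decorrelated across dimensions while staying invariant across views. A minimal NumPy sketch of such a loss is shown below; the function name and the off-diagonal weight are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def redundancy_reduction_loss(z_a, z_b, off_diag_weight=0.005):
    """Barlow Twins-style redundancy reduction loss (illustrative sketch).

    z_a, z_b: (batch, dim) embeddings of two augmented views of the
    same road environment. Returns a scalar loss.
    """
    # Standardize each embedding dimension across the batch.
    z_a = (z_a - z_a.mean(axis=0)) / (z_a.std(axis=0) + 1e-8)
    z_b = (z_b - z_b.mean(axis=0)) / (z_b.std(axis=0) + 1e-8)
    n, _ = z_a.shape

    # Cross-correlation matrix between the two views' embeddings.
    c = (z_a.T @ z_b) / n

    # Invariance term: diagonal entries should be 1 (views agree).
    on_diag = np.sum((np.diag(c) - 1.0) ** 2)
    # Redundancy term: off-diagonal entries should be 0 (decorrelated dims).
    off_diag = np.sum(c ** 2) - np.sum(np.diag(c) ** 2)
    return on_diag + off_diag_weight * off_diag
```

For identical views the invariance term vanishes, so the loss is driven only by the (down-weighted) redundancy term; for unrelated embeddings the diagonal correlations drop toward zero and the loss grows.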

Cite

Text

Wagner et al. "RedMotion: Motion Prediction via Redundancy Reduction." Transactions on Machine Learning Research, 2024.

Markdown

[Wagner et al. "RedMotion: Motion Prediction via Redundancy Reduction." Transactions on Machine Learning Research, 2024.](https://mlanthology.org/tmlr/2024/wagner2024tmlr-redmotion/)

BibTeX

@article{wagner2024tmlr-redmotion,
  title     = {{RedMotion: Motion Prediction via Redundancy Reduction}},
  author    = {Wagner, Royden and Ta{\c{s}}, {\"O}mer {\c{S}}ahin and Klemp, Marvin and Fernandez, Carlos and Stiller, Christoph},
  journal   = {Transactions on Machine Learning Research},
  year      = {2024},
  url       = {https://mlanthology.org/tmlr/2024/wagner2024tmlr-redmotion/}
}