Raising Context Awareness in Motion Forecasting
Abstract
Learning-based trajectory prediction models have achieved great success, with the promise of leveraging contextual information in addition to motion history. Yet, we find that state-of-the-art forecasting methods tend to over-rely on the agent’s current dynamics and fail to exploit the semantic contextual cues provided at their input. To alleviate this issue, we introduce CAB, a motion forecasting model equipped with a training procedure designed to promote the use of semantic contextual information. We also introduce two novel metrics, dispersion and convergence-to-range, to measure the temporal consistency of successive forecasts, a property that standard metrics do not capture. Our method is evaluated on the widely adopted nuScenes Prediction benchmark as well as on a subset of its most difficult examples. The code is available at github.com/valeoai/CAB.
Cite
Text
Ben-Younes et al. "Raising Context Awareness in Motion Forecasting." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2022. doi:10.1109/CVPRW56347.2022.00487
Markdown
[Ben-Younes et al. "Raising Context Awareness in Motion Forecasting." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2022.](https://mlanthology.org/cvprw/2022/benyounes2022cvprw-raising/) doi:10.1109/CVPRW56347.2022.00487
BibTeX
@inproceedings{benyounes2022cvprw-raising,
title = {{Raising Context Awareness in Motion Forecasting}},
author = {Ben-Younes, Hedi and Zablocki, Éloi and Chen, Mickaël and Pérez, Patrick and Cord, Matthieu},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
year = {2022},
  pages = {4408--4417},
doi = {10.1109/CVPRW56347.2022.00487},
url = {https://mlanthology.org/cvprw/2022/benyounes2022cvprw-raising/}
}