InAction: Interpretable Action Decision Making for Autonomous Driving
Abstract
Autonomous driving has attracted growing interest in interpretable action decision models that mimic human cognition. Existing interpretable autonomous driving models rely on static human explanations, ignoring implicit visual semantics that are not explicitly annotated or even consistent across annotators. In this paper, we propose a novel Interpretable Action decision making (InAction) model that provides enriched explanations from both explicit human annotations and implicit visual semantics. First, a visual-semantic module captures region-based action-inducing components from the visual input, learning implicit visual semantics that provide a human-understandable explanation for the action decision. Second, an explicit reasoning module incorporates global visual features and action-inducing visual semantics to jointly align the human-annotated explanation with the action decision. Experimental results on two autonomous driving benchmarks demonstrate the effectiveness of our InAction model at explaining both implicitly and explicitly, compared with existing interpretable autonomous driving models.
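
The abstract describes a two-module design: a visual-semantic module that attends to action-inducing regions, and an explicit reasoning module that fuses global features with those semantics to predict actions and human-annotated explanations. The following is a minimal, hypothetical PyTorch sketch of that structure; module names, feature dimensions, label counts, and the fusion strategy are assumptions for illustration, not the authors' released code.

# Hypothetical sketch of the two-module design summarized in the abstract.
import torch
import torch.nn as nn

class VisualSemanticModule(nn.Module):
    """Scores region features to highlight action-inducing components (implicit semantics)."""
    def __init__(self, region_dim=2048, num_semantics=16):
        super().__init__()
        self.attention = nn.Linear(region_dim, 1)                   # per-region importance
        self.semantic_head = nn.Linear(region_dim, num_semantics)   # attended regions -> semantic concepts

    def forward(self, region_feats):                                 # (B, R, D) region features
        weights = torch.softmax(self.attention(region_feats), dim=1) # (B, R, 1) region attention
        pooled = (weights * region_feats).sum(dim=1)                 # (B, D) attended summary
        semantics = self.semantic_head(pooled)                       # (B, num_semantics)
        return pooled, semantics, weights

class ExplicitReasoningModule(nn.Module):
    """Fuses global features with action-inducing semantics to predict actions and explanations."""
    def __init__(self, global_dim=2048, region_dim=2048, num_actions=4, num_explanations=21):
        super().__init__()
        self.fuse = nn.Linear(global_dim + region_dim, 512)
        self.action_head = nn.Linear(512, num_actions)               # e.g. stop / forward / left / right
        self.explanation_head = nn.Linear(512, num_explanations)     # human-annotated explanation labels

    def forward(self, global_feat, attended_regions):
        h = torch.relu(self.fuse(torch.cat([global_feat, attended_regions], dim=-1)))
        return self.action_head(h), self.explanation_head(h)

if __name__ == "__main__":
    # Toy forward pass with random tensors standing in for a CNN backbone's outputs.
    regions = torch.randn(2, 36, 2048)        # 36 detected regions per image (assumed)
    global_feat = torch.randn(2, 2048)
    vsm, erm = VisualSemanticModule(), ExplicitReasoningModule()
    pooled, semantics, weights = vsm(regions)
    actions, explanations = erm(global_feat, pooled)
    print(actions.shape, explanations.shape)  # torch.Size([2, 4]) torch.Size([2, 21])

In this sketch the region-attention weights serve as the implicit, human-understandable evidence, while the explanation head aligns with the explicit annotations; the real model's losses and backbone are described in the paper itself.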
Cite
Text
Jing et al. "InAction: Interpretable Action Decision Making for Autonomous Driving." Proceedings of the European Conference on Computer Vision (ECCV), 2022. doi:10.1007/978-3-031-19839-7_22
Markdown
[Jing et al. "InAction: Interpretable Action Decision Making for Autonomous Driving." Proceedings of the European Conference on Computer Vision (ECCV), 2022.](https://mlanthology.org/eccv/2022/jing2022eccv-inaction/) doi:10.1007/978-3-031-19839-7_22
BibTeX
@inproceedings{jing2022eccv-inaction,
  title     = {{InAction: Interpretable Action Decision Making for Autonomous Driving}},
  author    = {Jing, Taotao and Xia, Haifeng and Tian, Renran and Ding, Haoran and Luo, Xiao and Domeyer, Joshua and Sherony, Rini and Ding, Zhengming},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2022},
  doi       = {10.1007/978-3-031-19839-7_22},
  url       = {https://mlanthology.org/eccv/2022/jing2022eccv-inaction/}
}