Measuring Interpretability of Neural Policies of Robots with Disentangled Representation
Abstract
The advancement of robots, particularly those operating in complex human-centric environments, relies on control solutions driven by machine learning. Because robots are largely safety-critical systems, understanding how learning-based controllers make decisions is crucial. This calls for a formal and quantitative understanding of the explanatory factors in the interpretability of robot learning. In this paper, we study the interpretability of compact neural policies through the lens of disentangled representation. We leverage decision trees to obtain factors of variation [1] for disentanglement in robot learning; these factors encapsulate skills, behaviors, or strategies toward solving tasks. To assess how well networks uncover the underlying task dynamics, we introduce interpretability metrics that measure the disentanglement of learned neural dynamics from the perspectives of decision concentration, mutual information, and modularity. We demonstrate the connection between interpretability and disentanglement consistently across an extensive experimental analysis.
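The abstract names the mutual-information and modularity perspectives but not their exact formulation. As a minimal sketch under assumptions, the probe below discretizes each neuron's activations, estimates the mutual information between each neuron and decision-tree-derived factor labels, and rewards neurons whose information concentrates on a single factor, in the spirit of the modularity metric of Ridgeway and Mozer (2018). The function names, the equal-width binning, and the integer factor encoding are illustrative choices, not the paper's own definitions.

import numpy as np

def mutual_information(x_disc, y_disc):
    # MI (in nats) between two discrete label arrays via their joint histogram.
    joint = np.histogram2d(x_disc, y_disc,
                           bins=(int(x_disc.max()) + 1, int(y_disc.max()) + 1))[0]
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0                          # skip zero cells to avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def modularity_score(activations, factors, n_bins=20):
    # activations: (N, D) float array of neuron outputs along trajectories.
    # factors: (N, K) int array of factor labels (K >= 2), e.g. branch
    # assignments from a decision tree fit to the policy's decisions.
    N, D = activations.shape
    K = factors.shape[1]
    # Equal-width discretization of each neuron's activations.
    disc = np.stack([
        np.digitize(a, np.histogram_bin_edges(a, bins=n_bins)[1:-1])
        for a in activations.T
    ])                                    # (D, N), values in 0..n_bins-1
    mi = np.array([[mutual_information(disc[d], factors[:, k])
                    for k in range(K)] for d in range(D)])   # (D, K)
    # Modularity deviation: penalize MI spread over factors other than
    # each neuron's best-matched one.
    theta2 = mi.max(axis=1) ** 2          # (D,) squared best-factor MI
    dev = ((mi ** 2).sum(axis=1) - theta2) / (theta2 * (K - 1) + 1e-12)
    return float(1.0 - dev.mean())        # 1.0 = perfectly modular

Here, factors might hold, for instance, the branch indices of a decision tree evaluated at each visited state; the score approaches 1 when every neuron is informative about exactly one such factor.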
Cite

Text

Wang et al. "Measuring Interpretability of Neural Policies of Robots with Disentangled Representation." Conference on Robot Learning, 2023.
BibTeX
@inproceedings{wang2023corl-measuring,
  title     = {{Measuring Interpretability of Neural Policies of Robots with Disentangled Representation}},
  author    = {Wang, Tsun-Hsuan and Xiao, Wei and Seyde, Tim and Hasani, Ramin and Rus, Daniela},
  booktitle = {Conference on Robot Learning},
  year      = {2023},
  pages     = {602--641},
  volume    = {229},
  url       = {https://mlanthology.org/corl/2023/wang2023corl-measuring/}
}