Strengthening Layer Interaction via Dynamic Layer Attention
Abstract
Reinforcement Learning (RL) has shown great promise in domains like healthcare and robotics but often struggles with adoption due to its lack of interpretability. Counterfactual explanations, which address "what if" scenarios, provide a promising avenue for understanding RL decisions but remain underexplored for continuous action spaces. We propose a novel approach for generating counterfactual explanations in continuous-action RL by computing alternative action sequences that improve outcomes while minimizing deviations from the original sequence. Our approach leverages a distance metric for continuous actions and accounts for constraints such as adhering to predefined policies in specific states. Evaluations in two RL domains, Diabetes Control and Lunar Lander, demonstrate the effectiveness, efficiency, and generalization of our approach, enabling more interpretable and trustworthy RL applications.
Cite
Text
Wang et al. "Strengthening Layer Interaction via Dynamic Layer Attention." International Joint Conference on Artificial Intelligence, 2024. doi:10.24963/ijcai.2024/561
Markdown
[Wang et al. "Strengthening Layer Interaction via Dynamic Layer Attention." International Joint Conference on Artificial Intelligence, 2024.](https://mlanthology.org/ijcai/2024/wang2024ijcai-strengthening/) doi:10.24963/ijcai.2024/561
BibTeX
@inproceedings{wang2024ijcai-strengthening,
title = {{Strengthening Layer Interaction via Dynamic Layer Attention}},
author = {Wang, Kaishen and Xia, Xun and Liu, Jian and Yi, Zhang and He, Tao},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2024},
  pages = {5073--5081},
doi = {10.24963/ijcai.2024/561},
url = {https://mlanthology.org/ijcai/2024/wang2024ijcai-strengthening/}
}