Interpretable Concept Bottlenecks to Align Reinforcement Learning Agents

Abstract

Goal misalignment, reward sparsity, and difficult credit assignment are only a few of the many issues that make it difficult for deep reinforcement learning (RL) agents to learn optimal policies. Unfortunately, the black-box nature of deep neural networks impedes the inclusion of domain experts who could inspect the model and revise suboptimal policies. To this end, we introduce Successive Concept Bottleneck Agents (SCoBots), which integrate consecutive concept bottleneck (CB) layers. In contrast to current CB models, SCoBots represent concepts not only as properties of individual objects, but also as relations between objects, which is crucial for many RL tasks. Our experimental results provide evidence of SCoBots' competitive performance, as well as of their potential to let domain experts understand and regularize their behavior. Among other things, SCoBots enabled us to identify a previously unknown misalignment problem in the iconic video game Pong, and to resolve it. Overall, SCoBots thus result in more human-aligned RL agents.
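
To give a concrete picture of the idea described in the abstract, below is a minimal, hypothetical sketch (not the authors' released code) of a relational concept bottleneck serving as an interpretable interface between per-object properties and a policy head. All class, variable, and parameter names are illustrative assumptions; the sketch only shows the general pattern of deriving relational concepts (here, pairwise distances) from object properties and feeding the resulting concept vector to an inspectable linear action head.

import torch
import torch.nn as nn

class RelationalConceptBottleneck(nn.Module):
    """Illustrative sketch: object properties -> relational concepts -> action logits."""

    def __init__(self, num_objects: int, num_properties: int, num_actions: int):
        super().__init__()
        # Number of unordered object pairs used for relational concepts.
        num_pairs = num_objects * (num_objects - 1) // 2
        # Concept vector = flattened object properties + one relation per pair.
        concept_dim = num_objects * num_properties + num_pairs
        # Interpretable linear map from named concepts to action scores.
        self.action_head = nn.Linear(concept_dim, num_actions)

    def forward(self, object_properties: torch.Tensor) -> torch.Tensor:
        # object_properties: (batch, num_objects, num_properties), e.g. (x, y) per object.
        batch, n, _ = object_properties.shape
        # Relational concepts: Euclidean distance between every object pair.
        idx_i, idx_j = torch.triu_indices(n, n, offset=1)
        diffs = object_properties[:, idx_i] - object_properties[:, idx_j]
        distances = diffs.norm(dim=-1)  # (batch, num_pairs)
        # Concatenate object-level and relational concepts into one bottleneck.
        concepts = torch.cat([object_properties.flatten(1), distances], dim=1)
        return self.action_head(concepts)

# Usage: 3 objects (e.g. ball and two paddles in Pong), 2 properties each, 6 actions.
model = RelationalConceptBottleneck(num_objects=3, num_properties=2, num_actions=6)
action_logits = model(torch.randn(1, 3, 2))

Because every entry of the concept vector has a human-readable meaning (an object property or a pairwise relation), a domain expert can in principle inspect the weights of the final layer, or prune individual concepts, to revise a misaligned policy.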

Cite

Text

Delfosse et al. "Interpretable Concept Bottlenecks to Align Reinforcement Learning Agents." Neural Information Processing Systems, 2024. doi:10.52202/079017-2135

Markdown

[Delfosse et al. "Interpretable Concept Bottlenecks to Align Reinforcement Learning Agents." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/delfosse2024neurips-interpretable/) doi:10.52202/079017-2135

BibTeX

@inproceedings{delfosse2024neurips-interpretable,
  title     = {{Interpretable Concept Bottlenecks to Align Reinforcement Learning Agents}},
  author    = {Delfosse, Quentin and Sztwiertnia, Sebastian and Rothermel, Mark and Stammer, Wolfgang and Kersting, Kristian},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-2135},
  url       = {https://mlanthology.org/neurips/2024/delfosse2024neurips-interpretable/}
}