Can Active Sampling Reduce Causal Confusion in Offline Reinforcement Learning?

Abstract

Causal confusion is a phenomenon where an agent learns a policy that reflects spurious correlations in the data. The resulting causally confused behaviors may appear desirable during training but fail at deployment. This problem is exacerbated in domains such as robotics, where there can be a large gap between the open- and closed-loop performance of an agent. In such cases, a causally confused model may appear to perform well according to open-loop metrics yet fail catastrophically when deployed in the real world. In this paper, we conduct the first study of causal confusion in offline reinforcement learning and hypothesize that selectively sampling data points that help disambiguate the underlying causal mechanism of the environment may alleviate causal confusion. To investigate this hypothesis, we consider a set of simulated setups for studying causal confusion and the ability of active sampling schemes to reduce its effects. We provide empirical evidence that both random and active sampling schemes consistently reduce causal confusion as training progresses, and that active sampling does so more efficiently than random sampling.
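The abstract contrasts random sampling with active sampling of data points that help disambiguate the environment's causal mechanism. The paper itself does not specify an acquisition rule here, but a common way to realize such a scheme is to score offline data by a proxy for epistemic uncertainty, e.g. the disagreement of a small model ensemble, and prioritize high-scoring points. The sketch below is a hypothetical, minimal illustration of that idea on a toy regression dataset (all names, the ensemble-of-linear-models choice, and the disagreement score are illustrative assumptions, not the authors' method):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy offline dataset: 4 observed features, but only feature 0 is
# causally related to the target (the others are potential confounders).
X = rng.normal(size=(500, 4))
y = X[:, 0] + 0.1 * rng.normal(size=500)


def ensemble_disagreement(X, y, n_members=5, n_fit=50):
    """Fit a small ensemble of linear models on random data subsets and
    return the per-point variance of their predictions, a rough proxy
    for epistemic uncertainty."""
    preds = []
    for _ in range(n_members):
        idx = rng.choice(len(X), size=n_fit, replace=False)
        w, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
        preds.append(X @ w)
    return np.var(np.stack(preds), axis=0)


def active_batch(X, y, batch_size=32):
    """Active sampling: pick the points the ensemble disagrees on most."""
    scores = ensemble_disagreement(X, y)
    return np.argsort(scores)[-batch_size:]


def random_batch(X, batch_size=32):
    """Random-sampling baseline: uniform draw without replacement."""
    return rng.choice(len(X), size=batch_size, replace=False)


active_idx = active_batch(X, y)
random_idx = random_batch(X)
```

In a training loop, the selected indices would replace uniform minibatch sampling from the offline buffer; the paper's empirical claim is that such prioritization reduces causal confusion faster than the random baseline.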

Cite

Text

Gupta et al. "Can Active Sampling Reduce Causal Confusion in Offline Reinforcement Learning?" NeurIPS 2022 Workshops: CML4Impact, 2022.

Markdown

[Gupta et al. "Can Active Sampling Reduce Causal Confusion in Offline Reinforcement Learning?" NeurIPS 2022 Workshops: CML4Impact, 2022.](https://mlanthology.org/neuripsw/2022/gupta2022neuripsw-active/)

BibTeX

@inproceedings{gupta2022neuripsw-active,
  title     = {{Can Active Sampling Reduce Causal Confusion in Offline Reinforcement Learning?}},
  author    = {Gupta, Gunshi and Rudner, Tim G. J. and McAllister, Rowan Thomas and Gaidon, Adrien and Gal, Yarin},
  booktitle = {NeurIPS 2022 Workshops: CML4Impact},
  year      = {2022},
  url       = {https://mlanthology.org/neuripsw/2022/gupta2022neuripsw-active/}
}