Beyond Optimism: Exploration with Partially Observable Rewards

Abstract

Exploration in reinforcement learning (RL) remains an open challenge. RL algorithms rely on observing rewards to train the agent, and if informative rewards are sparse the agent learns slowly or may not learn at all. To improve exploration and reward discovery, popular algorithms rely on optimism. But what if rewards are sometimes unobservable, e.g., in situations of partial monitoring in bandits and the recent formalism of monitored Markov decision processes? In this case, optimism can lead to suboptimal behavior that does not explore further to collapse uncertainty. In this paper, we present a novel exploration strategy that overcomes the limitations of existing methods and guarantees convergence to an optimal policy even when rewards are not always observable. We further propose a collection of tabular environments for benchmarking exploration in RL (with and without unobservable rewards) and show that our method outperforms existing ones.
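To make the setting concrete, here is a minimal toy sketch (not from the paper) of a bandit with partially observable rewards: a reward is generated on every pull but is only revealed with some probability, otherwise the agent receives nothing. All names and parameters below (MonitoredBandit, observe_prob) are illustrative assumptions, not the authors' code.

    import random

    # Toy "monitored" bandit: rewards exist on every step but are only
    # observed with probability observe_prob (illustrative assumption).
    class MonitoredBandit:
        def __init__(self, means, observe_prob=0.3, seed=0):
            self.means = means                # true mean reward of each arm
            self.observe_prob = observe_prob  # chance the reward is revealed
            self.rng = random.Random(seed)

        def pull(self, arm):
            reward = self.rng.gauss(self.means[arm], 1.0)
            observed = self.rng.random() < self.observe_prob
            # None marks an unobservable reward; an optimistic agent that
            # treats missing feedback as resolved can stop exploring too early,
            # which is the failure mode the abstract describes.
            return reward if observed else None

    env = MonitoredBandit(means=[0.0, 1.0])
    print([env.pull(1) for _ in range(5)])  # mix of floats and None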

Cite

Text

Parisi et al. "Beyond Optimism: Exploration with Partially Observable Rewards." Neural Information Processing Systems, 2024. doi:10.52202/079017-2089

Markdown

[Parisi et al. "Beyond Optimism: Exploration with Partially Observable Rewards." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/parisi2024neurips-beyond/) doi:10.52202/079017-2089

BibTeX

@inproceedings{parisi2024neurips-beyond,
  title     = {{Beyond Optimism: Exploration with Partially Observable Rewards}},
  author    = {Parisi, Simone and Kazemipour, Alireza and Bowling, Michael},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-2089},
  url       = {https://mlanthology.org/neurips/2024/parisi2024neurips-beyond/}
}