Reinforcement Learning with Efficient Active Feature Acquisition
Abstract
Solving real-life sequential decision-making problems under partial observability involves an exploration-exploitation problem: an agent needs to gather information about the state of the world in order to make rewarding decisions. However, in real life, acquiring information is often highly costly; in the medical domain, for example, information acquisition might correspond to performing a medical test on a patient. This poses a significant challenge: the agent must perform the task optimally while reducing the cost of information acquisition. In this paper, we propose a model-based reinforcement learning framework that learns an active feature acquisition policy to solve this exploration-exploitation problem during execution. Key to its success is a novel sequential variational auto-encoder. We demonstrate the efficacy of our proposed framework in a control domain as well as with a medical simulator, outperforming natural baselines and yielding policies with greater cost efficiency.
Cite
Text
Yin et al. "Reinforcement Learning with Efficient Active Feature Acquisition." NeurIPS 2020 Workshops: LMCA, 2020.
Markdown
[Yin et al. "Reinforcement Learning with Efficient Active Feature Acquisition." NeurIPS 2020 Workshops: LMCA, 2020.](https://mlanthology.org/neuripsw/2020/yin2020neuripsw-reinforcement/)
BibTeX
@inproceedings{yin2020neuripsw-reinforcement,
title = {{Reinforcement Learning with Efficient Active Feature Acquisition}},
author = {Yin, Haiyan and Li, Yingzhen and Pan, Sinno and Zhang, Cheng and Tschiatschek, Sebastian},
booktitle = {NeurIPS 2020 Workshops: LMCA},
year = {2020},
url = {https://mlanthology.org/neuripsw/2020/yin2020neuripsw-reinforcement/}
}