Generalizable Policy Improvement via Reinforcement Sampling (Student Abstract)
Abstract
Current policy gradient techniques excel at refining policies over sampled states but falter when generalizing to unseen states. To address this, we introduce Reinforcement Sampling (RS), a novel method that leverages a generalizable action value function to sample improved decisions. RS improves decision quality whenever the action value estimate is accurate, refining the agent's decisions on the fly at the states it is currently visiting. Compared with the historically experienced states on which conventional policy gradient methods improve the policy, the currently visited states are more relevant to the agent. Our method fully exploits the generalizability of the value function on unseen states and sheds new light on the future development of generalizable reinforcement learning.
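The abstract gives no implementation details, but one plausible reading of "sampling improved decisions with a generalizable action value function" is best-of-N action sampling under a learned Q-function at the currently visited state. The sketch below is an illustrative assumption, not the paper's actual algorithm; the function names `policy_sample` and `q_value` are placeholders.

```python
import numpy as np

def reinforcement_sample(state, policy_sample, q_value, n_candidates=8, rng=None):
    """Hedged sketch: return the highest-scoring of n_candidates actions.

    policy_sample(state, rng) -> action  (stochastic base policy; assumed interface)
    q_value(state, action)    -> float   (learned action value estimate; assumed interface)
    """
    rng = rng if rng is not None else np.random.default_rng()
    # Draw candidate decisions from the base policy at the state being visited...
    candidates = [policy_sample(state, rng) for _ in range(n_candidates)]
    # ...and keep the one the action value function scores highest.
    scores = [q_value(state, a) for a in candidates]
    return candidates[int(np.argmax(scores))]

# Toy usage: a 1-D action space where Q(s, a) = -(a - 0.5)^2 peaks at a = 0.5.
best = reinforcement_sample(
    state=0.0,
    policy_sample=lambda s, r: r.uniform(-1.0, 1.0),
    q_value=lambda s, a: -(a - 0.5) ** 2,
    n_candidates=16,
)
```

Under this reading, improvement requires no gradient step on the policy: any state where the Q-estimate generalizes well yields a better-than-policy decision at act time.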
Cite
Text
Kong et al. "Generalizable Policy Improvement via Reinforcement Sampling (Student Abstract)." AAAI Conference on Artificial Intelligence, 2024. doi:10.1609/AAAI.V38I21.30466
Markdown
[Kong et al. "Generalizable Policy Improvement via Reinforcement Sampling (Student Abstract)." AAAI Conference on Artificial Intelligence, 2024.](https://mlanthology.org/aaai/2024/kong2024aaai-generalizable/) doi:10.1609/AAAI.V38I21.30466
BibTeX
@inproceedings{kong2024aaai-generalizable,
title = {{Generalizable Policy Improvement via Reinforcement Sampling (Student Abstract)}},
author = {Kong, Rui and Wu, Chenyang and Zhang, Zongzhang},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2024},
pages = {23546--23547},
doi = {10.1609/AAAI.V38I21.30466},
url = {https://mlanthology.org/aaai/2024/kong2024aaai-generalizable/}
}