Lifelong Hyper-Policy Optimization with Multiple Importance Sampling Regularization

Abstract

Learning in a lifelong setting, where the dynamics continually evolve, is a hard challenge for current reinforcement learning algorithms. Yet this would be a much-needed feature for practical applications. In this paper, we propose an approach that learns a hyper-policy which, taking the time as input, outputs the parameters of the policy to be queried at that time. This hyper-policy is trained to maximize the estimated future performance, efficiently reusing past data by means of importance sampling, at the cost of introducing a controlled bias. We combine the estimate of future performance with the past performance to mitigate catastrophic forgetting. To avoid overfitting the collected data, we derive a differentiable variance bound that we embed as a penalization term. Finally, we empirically validate our approach, comparing it with state-of-the-art algorithms, on realistic environments, including water resource management and trading.
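
The sketch below illustrates the core idea described in the abstract: a hyper-policy that maps time to a distribution over policy parameters, trained on past (time, parameters, return) tuples reweighted by importance sampling and regularized by a variance-style penalty. It is a minimal illustration, not the paper's method: the class and function names (`HyperPolicy`, `surrogate_objective`, `lambda_penalty`), the Gaussian parameterization, and the single-behavior importance weights are assumptions, whereas the paper uses a multiple importance sampling estimator and a derived differentiable variance bound.

```python
# Minimal sketch (assumed names and parameterization, not the paper's code).
import torch
import torch.nn as nn

class HyperPolicy(nn.Module):
    """Gaussian hyper-policy: given time t, outputs a distribution over policy parameters."""
    def __init__(self, param_dim: int, hidden: int = 32):
        super().__init__()
        self.mean_net = nn.Sequential(
            nn.Linear(1, hidden), nn.Tanh(), nn.Linear(hidden, param_dim)
        )
        self.log_std = nn.Parameter(torch.zeros(param_dim))

    def dist(self, t: torch.Tensor) -> torch.distributions.Normal:
        mean = self.mean_net(t.unsqueeze(-1))  # (batch, param_dim)
        return torch.distributions.Normal(mean, self.log_std.exp())

def surrogate_objective(hyper, times, thetas, returns, behavior_logp, lambda_penalty=0.1):
    """Importance-weighted performance estimate minus a variance-style penalty.

    times:         (B,)   times at which past policies were deployed
    thetas:        (B, d) policy parameters that were actually used
    returns:       (B,)   returns observed with those parameters
    behavior_logp: (B,)   log-density of thetas under the behavioral hyper-policy
    """
    target_logp = hyper.dist(times).log_prob(thetas).sum(-1)
    weights = torch.exp(target_logp - behavior_logp)  # importance weights
    weighted = weights * returns
    estimate = weighted.mean()
    # Crude surrogate of the variance penalization; the paper instead derives
    # a differentiable bound on the estimator's variance.
    penalty = weighted.var()
    return estimate - lambda_penalty * penalty

# Usage sketch with synthetic data: maximize the surrogate over the hyper-policy.
hyper = HyperPolicy(param_dim=4)
opt = torch.optim.Adam(hyper.parameters(), lr=1e-3)
B, d = 64, 4
times = torch.rand(B)
thetas = torch.randn(B, d)
returns = torch.randn(B)
behavior_logp = torch.distributions.Normal(0.0, 1.0).log_prob(thetas).sum(-1)
loss = -surrogate_objective(hyper, times, thetas, returns, behavior_logp)
opt.zero_grad(); loss.backward(); opt.step()
```

In this simplified view, the penalty term discourages hyper-policies whose importance weights blow up on the reused past data, which is the role the abstract attributes to the variance-bound regularizer.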

Cite

Text

Liotet et al. "Lifelong Hyper-Policy Optimization with Multiple Importance Sampling Regularization." AAAI Conference on Artificial Intelligence, 2022. doi:10.1609/AAAI.V36I7.20717

Markdown

[Liotet et al. "Lifelong Hyper-Policy Optimization with Multiple Importance Sampling Regularization." AAAI Conference on Artificial Intelligence, 2022.](https://mlanthology.org/aaai/2022/liotet2022aaai-lifelong/) doi:10.1609/AAAI.V36I7.20717

BibTeX

@inproceedings{liotet2022aaai-lifelong,
  title     = {{Lifelong Hyper-Policy Optimization with Multiple Importance Sampling Regularization}},
  author    = {Liotet, Pierre and Vidaich, Francesco and Metelli, Alberto Maria and Restelli, Marcello},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2022},
  pages     = {7525--7533},
  doi       = {10.1609/AAAI.V36I7.20717},
  url       = {https://mlanthology.org/aaai/2022/liotet2022aaai-lifelong/}
}