Unifying Gradient Estimators for Meta-Reinforcement Learning via Off-Policy Evaluation
Abstract
Model-agnostic meta-reinforcement learning requires estimating the Hessian matrix of value functions. This is challenging from an implementation perspective, as repeatedly differentiating policy gradient estimates may lead to biased Hessian estimates. In this work, we provide a unifying framework for estimating higher-order derivatives of value functions, based on off-policy evaluation. Our framework interprets a number of prior approaches as special cases and elucidates the bias and variance trade-off of Hessian estimates. This framework also opens the door to a new family of estimates, which can be easily implemented with auto-differentiation libraries, and lead to performance gains in practice.
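To make the operation described above concrete, here is a minimal sketch (not the paper's estimator) of what "repeatedly differentiating policy gradient estimates" looks like in an auto-differentiation library. All names (policy_logprob, surrogate, the toy trajectory data) are hypothetical; the naive score-function surrogate below yields an unbiased policy-gradient estimate on first differentiation, but its second derivative is in general a biased Hessian estimate, which is the issue the paper's off-policy evaluation framework addresses.

import jax
import jax.numpy as jnp

def policy_logprob(theta, state, action):
    # Hypothetical tabular softmax policy; theta has shape [n_states, n_actions].
    logits = theta[state]
    return jax.nn.log_softmax(logits)[action]

def surrogate(theta, states, actions, returns):
    # Naive score-function surrogate: sum_t log pi_theta(a_t | s_t) * R_t.
    # Differentiating once gives an unbiased policy-gradient estimate;
    # differentiating twice generally gives a biased Hessian estimate.
    logps = jax.vmap(lambda s, a: policy_logprob(theta, s, a))(states, actions)
    return jnp.sum(logps * returns)

# Toy trajectory data (placeholders for sampled states, actions, returns).
theta = jnp.zeros((5, 3))             # 5 states, 3 actions
states = jnp.array([0, 2, 4])
actions = jnp.array([1, 0, 2])
returns = jnp.array([1.0, 0.5, -0.2])

grad_est = jax.grad(surrogate)(theta, states, actions, returns)     # policy-gradient estimate
hess_est = jax.hessian(surrogate)(theta, states, actions, returns)  # "double differentiation" Hessian estimate
print(grad_est.shape, hess_est.shape)  # (5, 3) and (5, 3, 5, 3)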
Cite
Text
Tang et al. "Unifying Gradient Estimators for Meta-Reinforcement Learning via Off-Policy Evaluation." Neural Information Processing Systems, 2021.
Markdown
[Tang et al. "Unifying Gradient Estimators for Meta-Reinforcement Learning via Off-Policy Evaluation." Neural Information Processing Systems, 2021.](https://mlanthology.org/neurips/2021/tang2021neurips-unifying/)
BibTeX
@inproceedings{tang2021neurips-unifying,
title = {{Unifying Gradient Estimators for Meta-Reinforcement Learning via Off-Policy Evaluation}},
author = {Tang, Yunhao and Kozuno, Tadashi and Rowland, Mark and Munos, Remi and Valko, Michal},
booktitle = {Neural Information Processing Systems},
year = {2021},
url = {https://mlanthology.org/neurips/2021/tang2021neurips-unifying/}
}