Exponential Family Model-Based Reinforcement Learning via Score Matching
Abstract
We propose an optimistic model-based algorithm, dubbed SMRL, for finite-horizon episodic reinforcement learning (RL) when the transition model is specified by exponential family distributions with $d$ parameters and the reward is bounded and known. SMRL uses score matching, an unnormalized density estimation technique that enables efficient estimation of the model parameter by ridge regression. Under standard regularity assumptions, SMRL achieves $\tilde O(d\sqrt{H^3T})$ online regret, where $H$ is the length of each episode and $T$ is the total number of interactions (ignoring polynomial dependence on structural scale parameters).
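The key computational point in the abstract is that, for an exponential family density $p_\theta(x)\propto\exp(\theta^\top\psi(x))$, the score-matching objective is quadratic in $\theta$, so a ridge-regularized estimate has a closed form and never touches the normalizing constant. The sketch below is illustrative only (not the paper's SMRL algorithm): it instantiates score matching for a 1-D Gaussian in natural form, $p(x)\propto\exp(\theta_1 x - \theta_2 x^2/2)$, where the empirical objective is $J(\theta)=\tfrac12\theta^\top \hat G\theta + \hat b^\top\theta$ and the ridge minimizer is $\hat\theta = -(\hat G+\lambda I)^{-1}\hat b$; the function name and regularization level are my own choices.

```python
import numpy as np

def score_matching_gaussian(x, lam=1e-4):
    """Estimate natural parameters (theta1, theta2) of a 1-D Gaussian
    written as p(x) ∝ exp(theta1*x - theta2*x^2/2) by minimizing the
    empirical score-matching objective, which is quadratic in theta:
        J(theta) = 0.5 * theta^T G theta + b^T theta,
    so the ridge-regularized minimizer is theta = -(G + lam*I)^{-1} b.
    """
    m = len(x)
    # Sufficient statistics psi(x) = (x, -x^2/2), so the per-sample
    # gradients are d psi / dx = (1, -x) and the second derivatives
    # (Laplacian terms) are (0, -1).
    grad = np.stack([np.ones(m), -x], axis=1)   # shape (m, 2)
    G = grad.T @ grad / m                       # empirical Gram matrix
    b = np.array([0.0, -1.0])                   # mean Laplacian of psi
    return -np.linalg.solve(G + lam * np.eye(2), b)

rng = np.random.default_rng(0)
x = rng.normal(2.0, 0.5, size=200_000)  # samples with mu=2, sigma=0.5
theta = score_matching_gaussian(x)
# True natural parameters: theta2 = 1/sigma^2 = 4, theta1 = mu/sigma^2 = 8.
print(theta)
```

Because the loss is an exact quadratic, estimation reduces to a single linear solve, which is what makes the per-episode model updates in an optimistic algorithm computationally cheap.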
Cite
Text
Li et al. "Exponential Family Model-Based Reinforcement Learning via Score Matching." Neural Information Processing Systems, 2022.

Markdown

[Li et al. "Exponential Family Model-Based Reinforcement Learning via Score Matching." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/li2022neurips-exponential/)

BibTeX
@inproceedings{li2022neurips-exponential,
title = {{Exponential Family Model-Based Reinforcement Learning via Score Matching}},
author = {Li, Gene and Li, Junbo and Kabra, Anmol and Srebro, Nati and Wang, Zhaoran and Yang, Zhuoran},
booktitle = {Neural Information Processing Systems},
year = {2022},
url = {https://mlanthology.org/neurips/2022/li2022neurips-exponential/}
}