Randomized Value Functions via Multiplicative Normalizing Flows
Abstract
Randomized value functions offer a promising approach to the challenge of efficient exploration in complex environments with high-dimensional state and action spaces. Unlike traditional point-estimate methods, randomized value functions maintain a posterior distribution over action-space values. This prevents the agent’s behavior policy from prematurely exploiting early estimates and falling into local optima. In this work, we leverage recent advances in variational Bayesian neural networks and combine these with traditional Deep Q-Networks (DQN) and Deep Deterministic Policy Gradient (DDPG) to achieve randomized value functions for high-dimensional domains. In particular, we augment DQN and DDPG with multiplicative normalizing flows in order to track a rich approximate posterior distribution over the parameters of the value function. This allows the agent to perform approximate Thompson sampling in a computationally efficient manner via stochastic gradient methods. We demonstrate the benefits of our approach through an empirical comparison in high-dimensional environments.
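To make the mechanism concrete, here is a minimal sketch, assuming a PyTorch implementation: a Bayesian linear layer keeps a factorized Gaussian over its weights whose means are scaled by a multiplicative noise vector pushed through a stack of planar flows, so each forward pass draws one sample from the approximate posterior over the value function's parameters. The names PlanarFlow and MNFLinear are hypothetical, and the sketch omits the auxiliary posterior and KL regularizer used for training; it illustrates only the sampling path, not the authors' implementation.

# Minimal sketch of a multiplicative-normalizing-flow (MNF) value-network layer.
# Hypothetical names for illustration; training-time KL terms are omitted.
import torch
import torch.nn as nn


class PlanarFlow(nn.Module):
    """One planar flow step: z' = z + u * tanh(w^T z + b)."""

    def __init__(self, dim):
        super().__init__()
        self.u = nn.Parameter(0.01 * torch.randn(dim))
        self.w = nn.Parameter(0.01 * torch.randn(dim))
        self.b = nn.Parameter(torch.zeros(1))

    def forward(self, z):
        return z + self.u * torch.tanh(z @ self.w + self.b)


class MNFLinear(nn.Module):
    """Linear layer with posterior W_ij | z ~ N(z_j * mu_ij, sigma_ij^2),
    where z is z0 ~ N(0, I) pushed through a stack of planar flows."""

    def __init__(self, in_features, out_features, n_flows=2):
        super().__init__()
        self.mu = nn.Parameter(0.05 * torch.randn(out_features, in_features))
        self.log_sigma = nn.Parameter(torch.full((out_features, in_features), -5.0))
        self.flows = nn.ModuleList([PlanarFlow(in_features) for _ in range(n_flows)])

    def forward(self, x):
        z = torch.randn(self.mu.shape[1])  # z0 ~ N(0, I), one entry per input unit
        for flow in self.flows:
            z = flow(z)                    # flow enriches the multiplicative noise
        eps = torch.randn_like(self.mu)
        w = self.mu * z + self.log_sigma.exp() * eps  # one posterior weight sample
        return x @ w.t()


# Approximate Thompson sampling: each forward pass draws fresh weights, so
# acting greedily on the sampled Q-values explores via posterior uncertainty.
q_net = nn.Sequential(MNFLinear(4, 64), nn.ReLU(), MNFLinear(64, 2))
q_values = q_net(torch.randn(1, 4))  # toy state with 4 features, 2 actions
action = q_values.argmax(dim=1)

Because a posterior draw costs only one extra pass through a small flow, greedy action selection on the sampled Q-values approximates Thompson sampling at close to the cost of a standard DQN forward pass, which is the computational appeal the abstract points to.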
Cite
Text
Touati et al. "Randomized Value Functions via Multiplicative Normalizing Flows." Uncertainty in Artificial Intelligence, 2019.
Markdown
[Touati et al. "Randomized Value Functions via Multiplicative Normalizing Flows." Uncertainty in Artificial Intelligence, 2019.](https://mlanthology.org/uai/2019/touati2019uai-randomized/)
BibTeX
@inproceedings{touati2019uai-randomized,
  title = {{Randomized Value Functions via Multiplicative Normalizing Flows}},
  author = {Touati, Ahmed and Satija, Harsh and Romoff, Joshua and Pineau, Joelle and Vincent, Pascal},
  booktitle = {Uncertainty in Artificial Intelligence},
  year = {2019},
  pages = {422--432},
  volume = {115},
  url = {https://mlanthology.org/uai/2019/touati2019uai-randomized/}
}