Model-Free Linear Quadratic Control via Reduction to Expert Prediction
Abstract
Model-free approaches for reinforcement learning (RL) and continuous control find policies based only on past states and rewards, without fitting a model of the system dynamics. They are appealing as they are general-purpose and easy to implement; however, they also come with fewer theoretical guarantees than model-based RL. In this work, we present a new model-free algorithm for controlling linear quadratic (LQ) systems, and show that its regret scales as $O(T^{\xi+2/3})$ for any small $\xi>0$ if the time horizon satisfies $T>C^{1/\xi}$ for a constant $C$. The algorithm is based on a reduction of control of Markov decision processes to an expert prediction problem. In practice, it corresponds to a variant of policy iteration with forced exploration, where the policy in each phase is greedy with respect to the average of all previous value functions. This is the first model-free algorithm for adaptive control of LQ systems that provably achieves sublinear regret and has a polynomial computation cost. Empirically, our algorithm dramatically outperforms standard policy iteration, but performs worse than a model-based approach.
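The policy-iteration-with-averaging idea described in the abstract can be sketched roughly as follows. Everything concrete in this sketch is an assumption for illustration: the `step(x, u)` environment callback, the quadratic feature map, the least-squares Q-fitting, the discount factor (used only to keep the regression well-posed; the paper analyzes average cost), and all hyperparameter values are hypothetical, not the paper's exact algorithm or constants.

```python
import numpy as np


def features(x, u):
    """Quadratic features of the joint state-action vector z = [x; u].
    The Q-function of an LQ system is quadratic, so Q(x, u) = z' M z (+ const)
    is linear in the entries of the outer product z z'."""
    z = np.concatenate([x, u])
    return np.outer(z, z).ravel()


def greedy_gain(M, n):
    """Feedback gain K (with u = -K x) that is greedy with respect to a
    symmetric quadratic Q-matrix M partitioned over (state, action),
    assuming the action-action block is positive definite."""
    Mux, Muu = M[n:, :n], M[n:, n:]
    return np.linalg.solve(Muu, Mux)


def lq_policy_iteration(step, x0, n, d, num_phases=10, phase_len=2000,
                        sigma_explore=0.5, gamma=0.95, seed=0):
    """Hypothetical sketch of policy iteration with forced exploration for a
    model-free LQ problem. `step(x, u)` is assumed to return (cost, next_state).
    Each phase (i) rolls out the current linear policy plus Gaussian exploration
    noise, (ii) fits a quadratic Q-function by least squares on the observed
    one-step costs, and (iii) switches to the policy that is greedy with respect
    to the average of all Q-estimates collected so far."""
    rng = np.random.default_rng(seed)
    K = np.zeros((d, n))                      # current gain, u = -K x
    M_sum = np.zeros((n + d, n + d))          # running sum of Q-matrices
    x = x0
    for phase in range(num_phases):
        rows, targets = [], []
        for _ in range(phase_len):
            u = -K @ x + sigma_explore * rng.standard_normal(d)  # forced exploration
            cost, x_next = step(x, u)
            u_next = -K @ x_next                                 # on-policy next action
            # LSTD-Q style regression: phi(x,u)'m - gamma*phi(x',u')'m ~ c(x,u)
            rows.append(features(x, u) - gamma * features(x_next, u_next))
            targets.append(cost)
            x = x_next
        m, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
        M = m.reshape(n + d, n + d)
        M = 0.5 * (M + M.T)                   # only the symmetric part matters
        M_sum += M
        K = greedy_gain(M_sum / (phase + 1), n)  # greedy w.r.t. the averaged Q
    return K
```

Per the abstract, the distinguishing ingredient relative to standard policy iteration is that each new policy is greedy with respect to the average of all previous value-function estimates rather than only the most recent one.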
Cite
Text
Abbasi-Yadkori et al. "Model-Free Linear Quadratic Control via Reduction to Expert Prediction." Artificial Intelligence and Statistics, 2019.
Markdown
[Abbasi-Yadkori et al. "Model-Free Linear Quadratic Control via Reduction to Expert Prediction." Artificial Intelligence and Statistics, 2019.](https://mlanthology.org/aistats/2019/abbasiyadkori2019aistats-modelfree/)
BibTeX
@inproceedings{abbasiyadkori2019aistats-modelfree,
title = {{Model-Free Linear Quadratic Control via Reduction to Expert Prediction}},
author = {Abbasi-Yadkori, Yasin and Lazic, Nevena and Szepesvari, Csaba},
booktitle = {Artificial Intelligence and Statistics},
year = {2019},
pages = {3108--3117},
volume = {89},
url = {https://mlanthology.org/aistats/2019/abbasiyadkori2019aistats-modelfree/}
}