Learning the Linear Quadratic Regulator from Nonlinear Observations

Abstract

We introduce a new problem setting for continuous control called the LQR with Rich Observations, or RichLQR. In our setting, the environment is summarized by a low-dimensional continuous latent state with linear dynamics and quadratic costs, but the agent operates on high-dimensional, nonlinear observations such as images from a camera. To enable sample-efficient learning, we assume that the learner has access to a class of decoder functions (e.g., neural networks) that is flexible enough to capture the mapping from observations to latent states. We introduce a new algorithm, RichID, which learns a near-optimal policy for the RichLQR with sample complexity scaling only with the dimension of the latent state space and the capacity of the decoder function class. RichID is oracle-efficient and accesses the decoder class only through calls to a least-squares regression oracle. To our knowledge, our results constitute the first provable sample complexity guarantee for continuous control with an unknown nonlinearity in the system model.
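Below is a minimal sketch of the RichLQR generative model the abstract describes: linear dynamics and quadratic costs in a low-dimensional latent state, with the agent seeing only a fixed nonlinear observation of that state. The dimensions, noise scales, and the particular observation map `q` are illustrative assumptions, not details from the paper, and the zero policy at the end is just a placeholder for what RichID would learn.

```python
import numpy as np

rng = np.random.default_rng(0)

d_x, d_u, d_y = 2, 1, 64          # latent state, control, observation dims (assumed)

# Latent linear dynamics x_{t+1} = A x_t + B u_t + w_t with quadratic cost
# x^T Q x + u^T R u: a standard LQR in the latent space.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(d_x)
R = 0.1 * np.eye(d_u)

# A fixed nonlinear "camera": the agent never sees x_t, only y_t = q(x_t).
W = rng.normal(size=(d_y, d_x))
def q(x):
    return np.tanh(W @ x)         # stand-in for an image-rendering map (assumption)

def rollout(policy, horizon=100):
    """Roll out a policy that acts on observations, never on latent states."""
    x, total_cost = rng.normal(size=d_x), 0.0
    for _ in range(horizon):
        y = q(x)                  # high-dimensional nonlinear observation
        u = policy(y)             # the learner only ever touches y
        total_cost += x @ Q @ x + u @ R @ u
        x = A @ x + B @ u + 0.01 * rng.normal(size=d_x)  # process noise w_t
    return total_cost

# Placeholder zero policy; RichID would instead fit a decoder y -> x from
# the decoder class and control the recovered latent LQR.
print(rollout(lambda y: np.zeros(d_u)))
```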

Cite

Text

Mhammedi et al. "Learning the Linear Quadratic Regulator from Nonlinear Observations." Neural Information Processing Systems, 2020.

Markdown

[Mhammedi et al. "Learning the Linear Quadratic Regulator from Nonlinear Observations." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/mhammedi2020neurips-learning/)

BibTeX

@inproceedings{mhammedi2020neurips-learning,
  title     = {{Learning the Linear Quadratic Regulator from Nonlinear Observations}},
  author    = {Mhammedi, Zakaria and Foster, Dylan J. and Simchowitz, Max and Misra, Dipendra and Sun, Wen and Krishnamurthy, Akshay and Rakhlin, Alexander and Langford, John},
  booktitle = {Neural Information Processing Systems},
  year      = {2020},
  url       = {https://mlanthology.org/neurips/2020/mhammedi2020neurips-learning/}
}