Enforcing Robust Control Guarantees Within Neural Network Policies
Abstract
When designing controllers for safety-critical systems, practitioners often face a challenging tradeoff between robustness and performance. While robust control methods provide rigorous guarantees on system stability under certain worst-case disturbances, they often yield simple controllers that perform poorly in the average (non-worst) case. In contrast, nonlinear control methods trained using deep learning have achieved state-of-the-art performance on many control tasks, but often lack robustness guarantees. In this paper, we propose a technique that combines the strengths of these two approaches: constructing a generic nonlinear control policy class, parameterized by neural networks, that nonetheless enforces the same provable robustness criteria as robust control. Specifically, our approach entails integrating custom convex-optimization-based projection layers into a neural network-based policy. We demonstrate the power of this approach on several domains, improving in average-case performance over existing robust control methods and in worst-case stability over (non-robust) deep RL methods.
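To make the abstract's key idea concrete, here is a minimal, hypothetical sketch (not the authors' code) of a neural network policy whose output is passed through a differentiable convex-optimization projection layer, implemented with cvxpylayers. The generic state-dependent affine constraint `G u <= h` stands in for the paper's actual robustness (Lyapunov-type stability) condition, and all names and dimensions below are illustrative assumptions.

```python
# Hypothetical sketch: neural policy + differentiable projection onto a convex
# "safe" action set. The constraint G u <= h is a placeholder for the paper's
# robust stability condition; this is NOT the authors' implementation.
import cvxpy as cp
import torch
import torch.nn as nn
from cvxpylayers.torch import CvxpyLayer

STATE_DIM, ACTION_DIM, N_CONSTR = 4, 2, 3  # illustrative sizes

# Differentiable projection: argmin_u ||u - u_hat||^2  s.t.  G u <= h
_u = cp.Variable(ACTION_DIM)
_u_hat = cp.Parameter(ACTION_DIM)
_G = cp.Parameter((N_CONSTR, ACTION_DIM))
_h = cp.Parameter(N_CONSTR)
_proj = cp.Problem(cp.Minimize(cp.sum_squares(_u - _u_hat)), [_G @ _u <= _h])
projection_layer = CvxpyLayer(_proj, parameters=[_u_hat, _G, _h], variables=[_u])

class ProjectedPolicy(nn.Module):
    """Generic neural policy followed by a projection onto a convex constraint set."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, ACTION_DIM)
        )

    def forward(self, x, G, h):
        u_hat = self.net(x)                   # unconstrained "nominal" action
        (u,) = projection_layer(u_hat, G, h)  # nearest action satisfying G u <= h
        return u

# Usage: gradients flow through the projection, so the policy can be trained
# end-to-end while every emitted action satisfies the convex constraint.
policy = ProjectedPolicy()
x = torch.randn(STATE_DIM)
G = torch.randn(N_CONSTR, ACTION_DIM)
h = torch.ones(N_CONSTR)  # u = 0 is feasible, so the projection is well defined
u = policy(x, G, h)
```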
Cite
Text
Donti et al. "Enforcing Robust Control Guarantees Within Neural Network Policies." International Conference on Learning Representations, 2021.
Markdown
[Donti et al. "Enforcing Robust Control Guarantees Within Neural Network Policies." International Conference on Learning Representations, 2021.](https://mlanthology.org/iclr/2021/donti2021iclr-enforcing/)
BibTeX
@inproceedings{donti2021iclr-enforcing,
title = {{Enforcing Robust Control Guarantees Within Neural Network Policies}},
author = {Donti, Priya L. and Roderick, Melrose and Fazlyab, Mahyar and Kolter, J Zico},
booktitle = {International Conference on Learning Representations},
year = {2021},
url = {https://mlanthology.org/iclr/2021/donti2021iclr-enforcing/}
}