Training Deep Residual Networks for Uniform Approximation Guarantees
Abstract
It has recently been shown that deep residual networks with sufficiently high depth, but bounded width, are capable of universal approximation in the supremum norm sense. Based on these results, we show how to modify existing training algorithms for deep residual networks so as to provide approximation bounds for the test error, in the supremum norm, based on the training error. Our methods are based on control-theoretic interpretations of these networks both in discrete and continuous time, and establish that it is enough to suitably constrain the set of parameters being learned in a way that is compatible with most currently used training algorithms.
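The abstract's key point is that the guarantees survive ordinary training as long as the learned parameters stay inside a suitable constraint set. One standard way to achieve this, sketched below purely as a hypothetical illustration (this is not the paper's algorithm, and the toy residual model, the bound `BOUND`, and all hyperparameters are assumptions), is projected gradient descent: after each gradient step, the weights are clipped back into a bounded set.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scalar "residual network": h_{k+1} = h_k + a_k * tanh(h_k + b_k).
# This architecture is an illustrative assumption, not the paper's model.
DEPTH = 3
theta = rng.normal(size=2 * DEPTH) * 0.1  # parameters [a_1..a_D, b_1..b_D]
BOUND = 0.5  # assumed bound defining the constraint set for the parameters

def forward(theta, x):
    a, b = theta[:DEPTH], theta[DEPTH:]
    h = x
    for k in range(DEPTH):
        h = h + a[k] * np.tanh(h + b[k])  # one residual block
    return h

def loss(theta, xs, ys):
    return np.mean((forward(theta, xs) - ys) ** 2)

# Regression target: approximate y = 2x on a few sample points.
xs = np.linspace(-1.0, 1.0, 20)
ys = 2.0 * xs

def num_grad(theta, eps=1e-6):
    # Central finite differences; fine for a 6-parameter toy problem.
    g = np.zeros_like(theta)
    for i in range(theta.size):
        e = np.zeros_like(theta)
        e[i] = eps
        g[i] = (loss(theta + e, xs, ys) - loss(theta - e, xs, ys)) / (2 * eps)
    return g

init_loss = loss(theta, xs, ys)
for _ in range(500):
    theta -= 0.1 * num_grad(theta)            # ordinary gradient step
    theta = np.clip(theta, -BOUND, BOUND)     # projection onto constraint set
```

The projection is a single extra line after the optimizer step, which is why this kind of constraint is compatible with most existing training loops.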
Cite
Text
Marchi et al. "Training Deep Residual Networks for Uniform Approximation Guarantees." Proceedings of the 3rd Conference on Learning for Dynamics and Control, 2021.

Markdown
[Marchi et al. "Training Deep Residual Networks for Uniform Approximation Guarantees." Proceedings of the 3rd Conference on Learning for Dynamics and Control, 2021.](https://mlanthology.org/l4dc/2021/marchi2021l4dc-training/)

BibTeX
@inproceedings{marchi2021l4dc-training,
title = {{Training Deep Residual Networks for Uniform Approximation Guarantees}},
author = {Marchi, Matteo and Gharesifard, Bahman and Tabuada, Paulo},
booktitle = {Proceedings of the 3rd Conference on Learning for Dynamics and Control},
year = {2021},
pages = {677--688},
volume = {144},
url = {https://mlanthology.org/l4dc/2021/marchi2021l4dc-training/}
}