On the Forward Invariance of Neural ODEs
Abstract
We propose a new method to ensure neural ordinary differential equations (ODEs) satisfy output specifications by using invariance set propagation. Our approach uses a class of control barrier functions to transform output specifications into constraints on the parameters and inputs of the learning system. This setup allows us to achieve output specification guarantees simply by changing the constrained parameters/inputs both during training and inference. Moreover, we demonstrate that our invariance set propagation through data-controlled neural ODEs not only maintains generalization performance but also creates an additional degree of robustness by enabling causal manipulation of the system’s parameters/inputs. We test our method on a series of representation learning tasks, including modeling physical dynamics and convexity portraits, as well as safe collision avoidance for autonomous vehicles.
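The abstract's core idea, enforcing forward invariance of a safe set via a control barrier function (CBF) condition on the ODE's vector field, can be illustrated with a minimal numerical sketch. The safe set, the nominal vector field, and the gain `alpha` below are all illustrative assumptions, not the paper's actual models; the correction uses the standard closed-form solution of the minimum-norm CBF quadratic program, not the authors' specific propagation scheme.

```python
import numpy as np

# Hypothetical safe set S = {x : h(x) >= 0}, here the unit ball.
def h(x):
    return 1.0 - np.dot(x, x)

def grad_h(x):
    return -2.0 * x

def f_nominal(x):
    # Stand-in for a learned neural ODE vector field (assumption).
    return np.array([x[1], -0.5 * x[0]])

def f_safe(x, alpha=1.0):
    # Enforce the CBF condition  grad_h(x) . f(x) >= -alpha * h(x)
    # via the minimum-norm correction (closed-form QP solution):
    # if violated, add lambda * grad_h with lambda = -violation / ||grad_h||^2.
    g = grad_h(x)
    violation = g @ f_nominal(x) + alpha * h(x)
    if violation >= 0:
        return f_nominal(x)
    return f_nominal(x) - (violation / (g @ g)) * g

# Forward-Euler rollout: the corrected field keeps the state in S.
x = np.array([0.5, 0.0])
dt = 0.01
for _ in range(1000):
    x = x + dt * f_safe(x)
assert h(x) > 0.0  # state remains inside the safe set
```

When the condition already holds, the learned dynamics pass through unchanged; the correction only activates near the boundary of the safe set, which is why such filters tend to preserve the model's nominal behavior in the interior.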
Cite
Text
Xiao et al. "On the Forward Invariance of Neural ODEs." International Conference on Machine Learning, 2023.
Markdown
[Xiao et al. "On the Forward Invariance of Neural ODEs." International Conference on Machine Learning, 2023.](https://mlanthology.org/icml/2023/xiao2023icml-forward/)
BibTeX
@inproceedings{xiao2023icml-forward,
title = {{On the Forward Invariance of Neural ODEs}},
author = {Xiao, Wei and Wang, Tsun-Hsuan and Hasani, Ramin and Lechner, Mathias and Ban, Yutong and Gan, Chuang and Rus, Daniela},
booktitle = {International Conference on Machine Learning},
year = {2023},
pages = {38100--38124},
volume = {202},
url = {https://mlanthology.org/icml/2023/xiao2023icml-forward/}
}