Failing with Grace: Learning Neural Network Controllers That Are Boundedly Unsafe
Abstract
This work considers the problem of learning a feed-forward neural network controller to safely steer an arbitrarily shaped planar robot in a compact, obstacle-occluded workspace. Existing closed-loop safety assurances for trained neural network controllers impose stringent data density requirements near the boundary of the safe state space, which are hard to satisfy in practice. We propose an approach that lifts these strong assumptions and instead admits graceful safety violations, i.e., violations of bounded, spatially controlled magnitude. The method employs reachability analysis to incorporate safety constraints into the training process, simultaneously learning a safe vector field for the closed-loop system and providing provable numerical worst-case bounds on safety violations over the whole configuration space, defined by the overlap between an over-approximation of the closed-loop system's forward reachable set and the set of unsafe states.
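As a rough illustration of the reachability idea in the abstract, the sketch below uses interval bound propagation (a generic over-approximation technique, not necessarily the one used in the paper) to bound the one-step forward reachable set of a toy closed-loop system with a feed-forward ReLU controller, and then measures the worst-case penetration of that set into an unsafe half-space. The network weights, single-integrator dynamics, time step, and unsafe set are all made up for illustration.

```python
import numpy as np

# Hypothetical sketch, not the authors' method: interval bound propagation
# (IBP) through a small feed-forward ReLU controller, followed by one step
# of single-integrator dynamics x_{k+1} = x_k + dt * u. The resulting box
# over-approximates the forward reachable set, so its overlap with an
# unsafe half-space upper-bounds the true worst-case safety violation.

def affine_bounds(lo, hi, W, b):
    """Interval image of the affine map x -> W @ x + b for x in [lo, hi]."""
    W_pos, W_neg = np.clip(W, 0, None), np.clip(W, None, 0)
    return (W_pos @ lo + W_neg @ hi + b,
            W_pos @ hi + W_neg @ lo + b)

def relu_network_bounds(lo, hi, layers):
    """Propagate an interval through (affine, ReLU)* layers plus a final affine layer."""
    for W, b in layers[:-1]:
        lo, hi = affine_bounds(lo, hi, W, b)
        lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)  # ReLU is monotone
    W, b = layers[-1]
    return affine_bounds(lo, hi, W, b)

def one_step_reachable_box(x_lo, x_hi, layers, dt=0.1):
    """Box over-approximation of {x + dt * pi(x) : x in [x_lo, x_hi]}."""
    u_lo, u_hi = relu_network_bounds(x_lo, x_hi, layers)
    return x_lo + dt * u_lo, x_hi + dt * u_hi

def worst_case_violation(x_lo, x_hi, a, b):
    """Max of a @ x - b over the box; the unsafe set is {x : a @ x >= b}.
    A value <= 0 certifies the reachable box avoids this constraint."""
    a_pos, a_neg = np.clip(a, 0, None), np.clip(a, None, 0)
    return (a_pos @ x_hi + a_neg @ x_lo) - b

# Toy two-layer controller with made-up weights (illustrative only).
rng = np.random.default_rng(0)
layers = [(rng.normal(size=(8, 2)), np.zeros(8)),
          (rng.normal(size=(2, 8)), np.zeros(2))]

x_lo, x_hi = np.array([0.4, 0.4]), np.array([0.6, 0.6])  # one cell of state space
r_lo, r_hi = one_step_reachable_box(x_lo, x_hi, layers)
viol = worst_case_violation(r_lo, r_hi, a=np.array([1.0, 0.0]), b=1.0)
print(f"worst-case violation of x1 <= 1: {viol:.4f}")
```

Sweeping such interval cells over the whole configuration space, as hinted at in the abstract, would yield a spatially resolved numerical bound on safety violations rather than a single global certificate.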
Cite
Text
Vlantis et al. "Failing with Grace: Learning Neural Network Controllers That Are Boundedly Unsafe." Proceedings of The 5th Annual Learning for Dynamics and Control Conference, 2023.
Markdown
[Vlantis et al. "Failing with Grace: Learning Neural Network Controllers That Are Boundedly Unsafe." Proceedings of The 5th Annual Learning for Dynamics and Control Conference, 2023.](https://mlanthology.org/l4dc/2023/vlantis2023l4dc-failing/)
BibTeX
@inproceedings{vlantis2023l4dc-failing,
title = {{Failing with Grace: Learning Neural Network Controllers That Are Boundedly Unsafe}},
author = {Vlantis, Panagiotis and Bridgeman, Leila and Zavlanos, Michael},
booktitle = {Proceedings of The 5th Annual Learning for Dynamics and Control Conference},
year = {2023},
pages = {954--965},
volume = {211},
url = {https://mlanthology.org/l4dc/2023/vlantis2023l4dc-failing/}
}