A Robust Optimisation Perspective on Counterexample-Guided Repair of Neural Networks

Abstract

Counterexample-guided repair aims to create neural networks with mathematical safety guarantees, facilitating the application of neural networks in safety-critical domains. However, whether counterexample-guided repair is guaranteed to terminate remains an open question. We approach this question by showing that counterexample-guided repair can be viewed as a robust optimisation algorithm. While termination guarantees for neural network repair itself remain beyond our reach, we prove termination for more restrained machine learning models and disprove termination in a general setting. We empirically study the practical implications of our theoretical results, demonstrating the suitability of common verifiers and falsifiers for repair despite a disadvantageous theoretical result. Additionally, we use our theoretical insights to devise a novel algorithm for repairing linear regression models based on quadratic programming, surpassing existing approaches.
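The iterative scheme studied in the abstract can be sketched as a generic loop that alternates between a verifier (which searches for counterexamples) and a repair step (which updates the model against all counterexamples found so far). The following is a minimal illustrative sketch, not the paper's algorithm; the `verify` and `repair` interfaces are hypothetical placeholders.

```python
def counterexample_guided_repair(model, verify, repair, max_iters=100):
    """Generic counterexample-guided repair loop (hypothetical interfaces).

    verify(model)  -> list of counterexamples (empty if the model satisfies
                      the safety property)
    repair(model, counterexamples) -> updated model

    As the paper notes, termination is not guaranteed in general, so the
    loop is capped by an iteration budget.
    """
    counterexamples = []
    for _ in range(max_iters):
        new_cex = verify(model)
        if not new_cex:
            return model, True  # verified safe
        counterexamples.extend(new_cex)
        # repair against the accumulated counterexample set
        model = repair(model, counterexamples)
    return model, False  # budget exhausted without a safety proof
```

As a toy instance, take a scalar "model" w with the safety property w >= 1: the verifier reports w itself as a counterexample when the property is violated, and the repair step moves w to a safe value.

```python
verify = lambda w: [] if w >= 1 else [w]
repair = lambda w, cex: 1.0
w, ok = counterexample_guided_repair(0.0, verify, repair)
# ok is True and w == 1.0 after one repair iteration
```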

Cite

Text

Boetius et al. "A Robust Optimisation Perspective on Counterexample-Guided Repair of Neural Networks." International Conference on Machine Learning, 2023.

Markdown

[Boetius et al. "A Robust Optimisation Perspective on Counterexample-Guided Repair of Neural Networks." International Conference on Machine Learning, 2023.](https://mlanthology.org/icml/2023/boetius2023icml-robust/)

BibTeX

@inproceedings{boetius2023icml-robust,
  title     = {{A Robust Optimisation Perspective on Counterexample-Guided Repair of Neural Networks}},
  author    = {Boetius, David and Leue, Stefan and Sutter, Tobias},
  booktitle = {International Conference on Machine Learning},
  year      = {2023},
  pages     = {2712--2737},
  volume    = {202},
  url       = {https://mlanthology.org/icml/2023/boetius2023icml-robust/}
}