Lipschitz-Certifiable Training with a Tight Outer Bound
Abstract
Verifiable training is a promising research direction for training robust networks. However, most verifiable training methods are slow or lack scalability. In this study, we propose a fast and scalable certifiable training algorithm based on Lipschitz analysis and interval arithmetic. Our certifiable training algorithm provides a tight propagated outer bound by introducing box constraint propagation (BCP), and it efficiently computes the worst logit over this outer bound. In our experiments, BCP achieves a tighter outer bound than the global Lipschitz-based outer bound. Moreover, our certifiable training algorithm is over 12 times faster than the state-of-the-art dual-relaxation-based method while achieving comparable or better verification performance and improving natural accuracy. Our fast certifiable training algorithm with the tight outer bound scales to Tiny ImageNet with a verified accuracy of 20.1% (ℓ2-perturbation of ε = 36/255). Our code is available at https://github.com/sungyoon-lee/bcp.
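To illustrate the two kinds of outer bounds the abstract contrasts, the sketch below propagates an input region through a single affine layer in two ways: (i) standard interval arithmetic over an axis-aligned box, and (ii) a Lipschitz-style bound over an ℓ2 ball, then intersects the two coordinate-wise to get a tighter box. This is a minimal NumPy sketch of the general idea only, not the authors' BCP implementation; the function names (`interval_affine`, `lipschitz_ball_affine`) and the single-layer setup are assumptions for illustration. See the linked repository for the actual code.

```python
import numpy as np

def interval_affine(W, b, lo, hi):
    """Propagate an axis-aligned box [lo, hi] through x -> Wx + b
    using standard interval arithmetic."""
    center = (lo + hi) / 2.0
    radius = (hi - lo) / 2.0
    new_center = W @ center + b
    new_radius = np.abs(W) @ radius  # |W| maps box radii to output radii
    return new_center - new_radius, new_center + new_radius

def lipschitz_ball_affine(W, b, center, eps):
    """Propagate an l2 ball B(center, eps) through x -> Wx + b:
    output coordinate i moves by at most eps * ||W_i||_2."""
    new_center = W @ center + b
    new_radius = eps * np.linalg.norm(W, axis=1)
    return new_center - new_radius, new_center + new_radius

# Hypothetical example: an l2 ball of radius eps around x is contained
# in both outer sets, so intersecting the two bounds is sound and
# keeps the tighter estimate per coordinate.
rng = np.random.default_rng(0)
W, b = rng.standard_normal((4, 3)), rng.standard_normal(4)
x, eps = rng.standard_normal(3), 0.1
lo_box, hi_box = interval_affine(W, b, x - eps, x + eps)
lo_ball, hi_ball = lipschitz_ball_affine(W, b, x, eps)
lo, hi = np.maximum(lo_box, lo_ball), np.minimum(hi_box, hi_ball)
```

In the paper's setting such bounds would be propagated layer by layer through the whole network and used to compute the worst logit for the certified loss; the single-layer intersection above only shows why combining box and ball information can beat a global Lipschitz bound alone.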
Cite
Text
Lee et al. "Lipschitz-Certifiable Training with a Tight Outer Bound." Neural Information Processing Systems, 2020.

Markdown
[Lee et al. "Lipschitz-Certifiable Training with a Tight Outer Bound." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/lee2020neurips-lipschitzcertifiable/)

BibTeX
@inproceedings{lee2020neurips-lipschitzcertifiable,
title = {{Lipschitz-Certifiable Training with a Tight Outer Bound}},
author = {Lee, Sungyoon and Lee, Jaewook and Park, Saerom},
booktitle = {Neural Information Processing Systems},
year = {2020},
url = {https://mlanthology.org/neurips/2020/lee2020neurips-lipschitzcertifiable/}
}