Sorting Out Lipschitz Function Approximation

Abstract

Training neural networks under a strict Lipschitz constraint is useful for provable adversarial robustness, generalization bounds, interpretable gradients, and Wasserstein distance estimation. By the composition property of Lipschitz functions, it suffices to ensure that each individual affine transformation or nonlinear activation is 1-Lipschitz. The challenge is to do this while maintaining the expressive power of the network. We identify a necessary property for such an architecture: each of the layers must preserve the gradient norm during backpropagation. Based on this, we propose to combine a gradient norm preserving activation function, GroupSort, with norm-constrained weight matrices. We show that norm-constrained GroupSort architectures are universal Lipschitz function approximators. Empirically, we show that norm-constrained GroupSort networks achieve tighter estimates of Wasserstein distance than their ReLU counterparts and can achieve provable adversarial robustness guarantees with little cost to accuracy.
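To make the abstract's two ingredients concrete, below is a minimal NumPy sketch, not the authors' reference implementation: `group_sort` sorts pre-activations within contiguous groups (a coordinate permutation almost everywhere, hence gradient-norm preserving), and `bjorck_orthonormalize` is one standard way to approximate the norm-constrained (orthonormal) weight matrices the abstract pairs it with; the paper itself does not fix this particular projection in the abstract. The function names, the `group_size` default, and the iteration count are illustrative assumptions.

```python
import numpy as np

def group_sort(x, group_size=2):
    """GroupSort activation (minimal sketch): split the last axis into
    contiguous groups of `group_size` and sort each group ascending.
    Sorting is a coordinate permutation almost everywhere, so the
    Jacobian is a permutation matrix: the map is 1-Lipschitz and
    preserves gradient norms during backpropagation."""
    n = x.shape[-1]
    assert n % group_size == 0, "feature dim must divide into groups"
    grouped = x.reshape(x.shape[:-1] + (n // group_size, group_size))
    return np.sort(grouped, axis=-1).reshape(x.shape)

def bjorck_orthonormalize(w, iters=25):
    """One standard way to obtain approximately orthonormal weights:
    the Bjorck iteration w <- 1.5*w - 0.5*w(w^T w). Pre-scaling by the
    Frobenius norm keeps the spectral norm below 1 so the iteration
    converges. Orthonormal weight matrices are norm-constrained and
    gradient-norm preserving."""
    w = w / np.linalg.norm(w)  # Frobenius norm bounds the spectral norm
    for _ in range(iters):
        w = 1.5 * w - 0.5 * w @ w.T @ w
    return w

# Sanity checks: GroupSort preserves the 2-norm of its input (it only
# permutes coordinates), and the orthonormalized matrix has w^T w ~= I.
rng = np.random.default_rng(0)
x = rng.normal(size=8)
print(np.linalg.norm(x), np.linalg.norm(group_sort(x)))  # equal norms
w = bjorck_orthonormalize(rng.normal(size=(8, 8)))
print(np.max(np.abs(w.T @ w - np.eye(8))))               # close to 0
```

With `group_size=2`, GroupSort reduces to the MaxMin activation; because sorting merely permutes coordinates, the activation costs no expressive power relative to the pre-activations it receives.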

Cite

Text

Anil et al. "Sorting Out Lipschitz Function Approximation." International Conference on Machine Learning, 2019.

Markdown

[Anil et al. "Sorting Out Lipschitz Function Approximation." International Conference on Machine Learning, 2019.](https://mlanthology.org/icml/2019/anil2019icml-sorting/)

BibTeX

@inproceedings{anil2019icml-sorting,
  title     = {{Sorting Out Lipschitz Function Approximation}},
  author    = {Anil, Cem and Lucas, James and Grosse, Roger},
  booktitle = {International Conference on Machine Learning},
  year      = {2019},
  pages     = {291--301},
  volume    = {97},
  url       = {https://mlanthology.org/icml/2019/anil2019icml-sorting/}
}