Generalization Bounds for Adversarial Metric Learning

Abstract

Adversarial metric learning has recently been proposed to enhance the robustness of learned distance metrics against adversarial perturbations. Despite rapid progress in validating its effectiveness empirically, theoretical guarantees on adversarial robustness and generalization remain far less understood. To fill this gap, this paper unveils the generalization properties of adversarial metric learning by developing uniform convergence analysis techniques. Based on capacity estimation via covering numbers, we establish the first high-probability generalization bounds of order O(n^{-1/2}) for adversarial metric learning with pairwise perturbations and general losses, where n is the number of training samples. Moreover, using local Rademacher complexity, we obtain refined generalization bounds of order O(n^{-1}) for smooth losses, which are faster than previous results for adversarial pairwise learning, e.g., adversarial bipartite ranking. Experimental evaluation on real-world datasets validates our theoretical findings.
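As a rough illustration of the two rates (a hypothetical sketch, not the paper's exact theorems): below, R_adv denotes the adversarial population risk of a metric M from a class 𝓜, its hatted version the empirical risk over n training samples, 𝓝(𝓜, ε) an ε-covering number of the class, and the constants C_1, C_2 (with logarithmic factors suppressed) are illustrative assumptions.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}

% Slow rate (general losses): uniform convergence via covering numbers.
% Schematically, with probability at least $1-\delta$ over the draw of
% $n$ training samples,
\begin{equation*}
  \sup_{M \in \mathcal{M}}
    \bigl( R_{\mathrm{adv}}(M) - \widehat{R}_{\mathrm{adv}}(M) \bigr)
  \;\le\;
  C_1 \sqrt{\frac{\log \mathcal{N}(\mathcal{M}, \epsilon) + \log(1/\delta)}{n}}
  \;=\; O\bigl(n^{-1/2}\bigr).
\end{equation*}

% Fast rate (smooth losses): local Rademacher complexity replaces the
% global capacity term, shrinking the deviation around the empirical
% minimizer $\widehat{M}$ (logarithmic factors suppressed).
\begin{equation*}
  R_{\mathrm{adv}}(\widehat{M}) - \widehat{R}_{\mathrm{adv}}(\widehat{M})
  \;\le\;
  C_2 \, \frac{\log \mathcal{N}(\mathcal{M}, \epsilon) + \log(1/\delta)}{n}
  \;=\; O\bigl(n^{-1}\bigr).
\end{equation*}

\end{document}
```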

Cite

Text

Wen et al. "Generalization Bounds for Adversarial Metric Learning." International Joint Conference on Artificial Intelligence, 2023. doi:10.24963/IJCAI.2023/489

Markdown

[Wen et al. "Generalization Bounds for Adversarial Metric Learning." International Joint Conference on Artificial Intelligence, 2023.](https://mlanthology.org/ijcai/2023/wen2023ijcai-generalization/) doi:10.24963/IJCAI.2023/489

BibTeX

@inproceedings{wen2023ijcai-generalization,
  title     = {{Generalization Bounds for Adversarial Metric Learning}},
  author    = {Wen, Wen and Li, Han and Chen, Hong and Wu, Rui and Wu, Lingjuan and Zhu, Liangxuan},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2023},
  pages     = {4397--4405},
  doi       = {10.24963/IJCAI.2023/489},
  url       = {https://mlanthology.org/ijcai/2023/wen2023ijcai-generalization/}
}