On the Adversarial Robustness of Out-of-Distribution Generalization Models

Abstract

Out-of-distribution (OOD) generalization has attracted increasing research attention in recent years, owing to its promising experimental results in real-world applications. Interestingly, we find that existing OOD generalization methods are vulnerable to adversarial attacks. This motivates us to study OOD adversarial robustness. We first present theoretical analyses of OOD adversarial robustness in two complementary settings. Motivated by the theoretical results, we design two algorithms to improve OOD adversarial robustness. Finally, we conduct experiments to validate the effectiveness of our proposed algorithms.
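As background for the vulnerability the abstract refers to, the sketch below shows a standard way to probe adversarial robustness: an L-infinity projected gradient descent (PGD) attack applied to a trained classifier, whose accuracy on the perturbed inputs from an unseen test domain measures OOD adversarial robustness. This is a generic illustration in PyTorch, not the paper's algorithms; the model, epsilon budget, and step sizes are assumptions chosen for readability.

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Craft L-infinity PGD adversarial examples for inputs x with labels y."""
    # Start from a random point inside the eps-ball around x.
    x_adv = x.clone().detach() + torch.empty_like(x).uniform_(-eps, eps)
    x_adv = torch.clamp(x_adv, 0.0, 1.0)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss with a signed-gradient step, then project back
        # into the eps-ball around the clean input and the valid pixel range.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)
        x_adv = torch.clamp(x_adv, 0.0, 1.0)
    return x_adv.detach()

Evaluating an OOD-trained model on pgd_attack(model, x_test, y_test) for a held-out domain, and comparing the resulting accuracy with clean OOD accuracy, is the kind of measurement the abstract's robustness claims are about.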

Cite

Text

Zou and Liu. "On the Adversarial Robustness of Out-of-Distribution Generalization Models." Neural Information Processing Systems, 2023.

Markdown

[Zou and Liu. "On the Adversarial Robustness of Out-of-Distribution Generalization Models." Neural Information Processing Systems, 2023.](https://mlanthology.org/neurips/2023/zou2023neurips-adversarial/)

BibTeX

@inproceedings{zou2023neurips-adversarial,
  title     = {{On the Adversarial Robustness of Out-of-Distribution Generalization Models}},
  author    = {Zou, Xin and Liu, Weiwei},
  booktitle = {Neural Information Processing Systems},
  year      = {2023},
  url       = {https://mlanthology.org/neurips/2023/zou2023neurips-adversarial/}
}