Synergy-of-Experts: Collaborate to Improve Adversarial Robustness
Abstract
Learning adversarially robust models requires predictions that remain invariant within a small neighborhood of each natural input, a requirement that often exceeds the capacity of a single model. Prior research has shown that learning multiple sub-models as an ensemble can mitigate this capacity shortage, further improving generalization and robustness. However, the ensemble's voting-based strategy rules out a correct prediction whenever the true prediction is held only by a minority of sub-models. This paper therefore improves upon the ensemble through a collaboration scheme, Synergy-of-Experts (SoE). In contrast to the voting-based strategy, SoE can produce a correct prediction as long as even a single sub-model is correct. In SoE, each sub-model fits its own specific vulnerability area and leaves the remaining vulnerability areas to the other sub-models, which makes more effective use of the overall model capacity. Empirical experiments verify that SoE outperforms various ensemble methods against white-box and transfer-based adversarial attacks.
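To make the contrast with voting concrete, below is a minimal illustrative sketch in NumPy comparing majority voting with a single-expert selection rule in the spirit of SoE. The confidence rule used here (pick the sub-model with the highest maximum class probability) and the function names are assumptions for illustration, not the paper's exact inference procedure.

import numpy as np

def vote_predict(probs):
    """Majority vote over the sub-models' argmax predictions.
    probs: array of shape (n_models, n_classes) for a single input."""
    votes = probs.argmax(axis=1)
    return np.bincount(votes, minlength=probs.shape[1]).argmax()

def soe_predict(probs):
    """Single-expert inference: defer to the most confident sub-model.
    (Assumed confidence rule: highest maximum class probability.)"""
    expert = probs.max(axis=1).argmax()  # index of the most confident sub-model
    return probs[expert].argmax()

# Toy case: only 1 of 3 sub-models is correct (true class 0), so it is
# in the minority, yet it is the most confident.
probs = np.array([
    [0.95, 0.05, 0.00],  # confident and correct
    [0.40, 0.55, 0.05],  # weakly wrong
    [0.35, 0.60, 0.05],  # weakly wrong
])
print(vote_predict(probs))  # 1 -- voting follows the wrong majority
print(soe_predict(probs))   # 0 -- the single correct expert is selected

The toy input shows the failure mode the abstract describes: voting cannot recover the correct class held by a minority, while selecting one expert can, provided the correct sub-model can be identified.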
Cite
Text
Cui et al. "Synergy-of-Experts: Collaborate to Improve Adversarial Robustness." Neural Information Processing Systems, 2022.

Markdown
[Cui et al. "Synergy-of-Experts: Collaborate to Improve Adversarial Robustness." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/cui2022neurips-synergyofexperts/)

BibTeX
@inproceedings{cui2022neurips-synergyofexperts,
  title = {{Synergy-of-Experts: Collaborate to Improve Adversarial Robustness}},
  author = {Cui, Sen and Zhang, Jingfeng and Liang, Jian and Han, Bo and Sugiyama, Masashi and Zhang, Changshui},
  booktitle = {Neural Information Processing Systems},
  year = {2022},
  url = {https://mlanthology.org/neurips/2022/cui2022neurips-synergyofexperts/}
}