NEO-KD: Knowledge-Distillation-Based Adversarial Training for Robust Multi-Exit Neural Networks
Abstract
While multi-exit neural networks are regarded as a promising solution for efficient inference via early exits, combating adversarial attacks remains a challenging problem. In multi-exit networks, the strong dependency among different submodels means that an adversarial example targeting a specific exit degrades not only the performance of that exit but also the performance of all other exits, making multi-exit networks highly vulnerable even to simple adversarial attacks. In this paper, we propose NEO-KD, a knowledge-distillation-based adversarial training strategy that tackles this fundamental challenge through two key contributions. First, NEO-KD applies neighbor knowledge distillation to guide the output of an adversarial example toward the ensemble output of neighboring exits on clean data. Second, NEO-KD employs exit-wise orthogonal knowledge distillation to reduce adversarial transferability across different submodels. Together, these yield significantly improved robustness against adversarial attacks. Experimental results on various datasets and models show that our method achieves the best adversarial accuracy with reduced computation budgets, compared to baselines relying on existing adversarial training or knowledge distillation techniques for multi-exit networks.
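To make the neighbor knowledge distillation idea concrete, below is a minimal sketch in PyTorch under stated assumptions: the function name `nkd_loss`, the one-hop neighbor window, and the temperature parameter are illustrative choices, not the authors' released code, and the exit-wise orthogonal distillation term is omitted since the abstract does not specify its form.

```python
# Minimal sketch (assumptions: one-hop neighbor window, KL-based distillation,
# softmax temperature). Not the authors' implementation.
import torch
import torch.nn.functional as F

def nkd_loss(clean_logits, adv_logits, temperature=1.0):
    """clean_logits, adv_logits: lists of [batch, classes] tensors, one per exit."""
    num_exits = len(clean_logits)
    loss = 0.0
    for i in range(num_exits):
        # Ensemble the clean-data predictions of the neighboring exits (i-1, i, i+1).
        neighbors = [j for j in (i - 1, i, i + 1) if 0 <= j < num_exits]
        ensemble = torch.stack(
            [F.softmax(clean_logits[j] / temperature, dim=1) for j in neighbors]
        ).mean(dim=0)
        # Distill: pull the adversarial output of exit i toward that clean ensemble.
        log_p_adv = F.log_softmax(adv_logits[i] / temperature, dim=1)
        loss = loss + F.kl_div(log_p_adv, ensemble.detach(), reduction="batchmean")
    return loss / num_exits
```

In a full adversarial training loop, this term would be combined with the standard classification losses on clean and adversarial examples at each exit.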
Cite
Text
Ham et al. "NEO-KD: Knowledge-Distillation-Based Adversarial Training for Robust Multi-Exit Neural Networks." Neural Information Processing Systems, 2023.
Markdown
[Ham et al. "NEO-KD: Knowledge-Distillation-Based Adversarial Training for Robust Multi-Exit Neural Networks." Neural Information Processing Systems, 2023.](https://mlanthology.org/neurips/2023/ham2023neurips-neokd/)
BibTeX
@inproceedings{ham2023neurips-neokd,
title = {{NEO-KD: Knowledge-Distillation-Based Adversarial Training for Robust Multi-Exit Neural Networks}},
author = {Ham, Seokil and Park, Jungwuk and Han, Dong-Jun and Moon, Jaekyun},
booktitle = {Neural Information Processing Systems},
year = {2023},
url = {https://mlanthology.org/neurips/2023/ham2023neurips-neokd/}
}