Dual Expert Distillation Network for Generalized Zero-Shot Learning
Abstract
Recently, numerous methods have been proposed to enhance the robustness of Graph Convolutional Networks (GCNs) against adversarial attacks. Despite their empirical success, a significant gap remains in understanding GCNs' adversarial robustness from a theoretical perspective. This paper addresses this gap by analyzing generalization against both node and structure attacks for multi-layer GCNs through the framework of uniform stability. Under the smoothness assumption of the loss function, we establish the first adversarial generalization bound of GCNs in expectation. Our theoretical analysis contributes to a deeper understanding of how adversarial perturbations and graph architectures influence generalization performance, which provides meaningful insights for designing robust models. Experimental results on benchmark datasets confirm the validity of our theoretical findings, highlighting their practical significance.
Cite
Text
Rao et al. "Dual Expert Distillation Network for Generalized Zero-Shot Learning." International Joint Conference on Artificial Intelligence, 2024. doi:10.24963/ijcai.2024/534
Markdown
[Rao et al. "Dual Expert Distillation Network for Generalized Zero-Shot Learning." International Joint Conference on Artificial Intelligence, 2024.](https://mlanthology.org/ijcai/2024/rao2024ijcai-dual/) doi:10.24963/ijcai.2024/534
BibTeX
@inproceedings{rao2024ijcai-dual,
title = {{Dual Expert Distillation Network for Generalized Zero-Shot Learning}},
author = {Rao, Zhijie and Guo, Jingcai and Lu, Xiaocheng and Liang, Jingming and Zhang, Jie and Wang, Haozhao and Wei, Kang and Cao, Xiaofeng},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2024},
pages = {4833--4841},
doi = {10.24963/ijcai.2024/534},
url = {https://mlanthology.org/ijcai/2024/rao2024ijcai-dual/}
}