Adversarial Learning for Robust Deep Clustering

Abstract

Deep clustering jointly optimizes the embedding and the clustering assignments to learn an optimal nonlinear embedding space, making it more effective in real-world scenarios than conventional clustering methods. However, the robustness of the clustering network degrades easily, especially under adversarial attack: because no labels are available, even a small perturbation in the embedding space can produce drastically different clustering results. In this paper, we propose a robust deep clustering method based on adversarial learning. Specifically, we first define adversarial samples in the embedding space of the clustering network and devise an attack strategy that finds samples which easily fool the clustering layers yet barely affect the quality of the deep embedding. We then provide a simple yet efficient defense algorithm that improves the robustness of the clustering network. Experimental results on two popular datasets show that the proposed adversarial learning method significantly enhances robustness and further improves overall clustering performance. Notably, the proposed method is generally applicable to multiple existing clustering frameworks, boosting their robustness. The source code is available at https://github.com/xdxuyang/ALRDC.
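To make the attack-and-defense idea concrete, below is a minimal, hypothetical PyTorch sketch of what such an embedding-space attack and consistency-based defense could look like. It is not the authors' implementation (see the repository above for that): ClusterHead (a DEC-style soft-assignment layer), pgd_attack, defense_step, and all hyperparameters are illustrative assumptions. The attack searches for a small perturbation delta that changes the soft cluster assignments while staying within a budget that leaves the embedding essentially intact; the defense trains the clustering layer to keep its assignments stable under that perturbation.

# Hypothetical sketch of the attack/defense idea in the abstract;
# not the released ALRDC code. All names and hyperparameters are
# illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClusterHead(nn.Module):
    # DEC-style soft assignment: q_ij proportional to (1 + ||z_i - mu_j||^2)^(-1)
    def __init__(self, n_clusters, dim):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(n_clusters, dim))

    def forward(self, z):
        q = 1.0 / (1.0 + torch.cdist(z, self.centers).pow(2))
        return q / q.sum(dim=1, keepdim=True)

def pgd_attack(head, z, eps=0.05, alpha=0.01, steps=10):
    # Search for a small embedding-space perturbation (L-inf budget eps)
    # that maximizes the divergence between clean and perturbed soft
    # assignments: the embedding barely moves, the clustering output does.
    q_clean = head(z).detach()
    delta = torch.zeros_like(z, requires_grad=True)
    for _ in range(steps):
        loss = F.kl_div(head(z + delta).log(), q_clean, reduction="batchmean")
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()  # ascend the divergence
            delta.clamp_(-eps, eps)             # project back into the budget
        delta.grad.zero_()
    return delta.detach()

def defense_step(head, optimizer, z, eps=0.05):
    # One adversarial-training step: penalize assignment changes under attack.
    delta = pgd_attack(head, z.detach(), eps=eps)
    q_clean = head(z).detach()
    loss = F.kl_div(head(z + delta).log(), q_clean, reduction="batchmean")
    optimizer.zero_grad()  # also clears grads accumulated during the attack
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage with random embeddings standing in for an encoder's output:
head = ClusterHead(n_clusters=10, dim=32)
opt = torch.optim.Adam(head.parameters(), lr=1e-3)
z = torch.randn(128, 32)
print(defense_step(head, opt, z))

In a full pipeline this consistency loss would be added to the usual clustering objective; the sketch isolates only the robustness term described in the abstract.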

Cite

Text

Yang et al. "Adversarial Learning for Robust Deep Clustering." Neural Information Processing Systems, 2020.

Markdown

[Yang et al. "Adversarial Learning for Robust Deep Clustering." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/yang2020neurips-adversarial/)

BibTeX

@inproceedings{yang2020neurips-adversarial,
  title     = {{Adversarial Learning for Robust Deep Clustering}},
  author    = {Yang, Xu and Deng, Cheng and Wei, Kun and Yan, Junchi and Liu, Wei},
  booktitle = {Neural Information Processing Systems},
  year      = {2020},
  url       = {https://mlanthology.org/neurips/2020/yang2020neurips-adversarial/}
}