Topological Detection of Trojaned Neural Networks

Abstract

Deep neural networks are known to be vulnerable to security attacks. One particular threat is the Trojan attack, in which an attacker stealthily manipulates a model's behavior through Trojaned training samples that can later be exploited. Guided by basic neuroscientific principles, we discover subtle yet critical structural deviations that characterize Trojaned models. Our analysis uses topological tools, which allow us to model high-order dependencies in the networks, robustly compare different networks, and localize structural abnormalities. One interesting observation is that Trojaned models develop shortcuts from shallow to deep layers. Inspired by these observations, we devise a strategy for robust detection of Trojaned models, which outperforms standard baselines on multiple benchmarks.

Cite

Text

Zheng et al. "Topological Detection of Trojaned Neural Networks." Neural Information Processing Systems, 2021.

Markdown

[Zheng et al. "Topological Detection of Trojaned Neural Networks." Neural Information Processing Systems, 2021.](https://mlanthology.org/neurips/2021/zheng2021neurips-topological/)

BibTeX

@inproceedings{zheng2021neurips-topological,
  title     = {{Topological Detection of Trojaned Neural Networks}},
  author    = {Zheng, Songzhu and Zhang, Yikai and Wagner, Hubert and Goswami, Mayank and Chen, Chao},
  booktitle = {Neural Information Processing Systems},
  year      = {2021},
  url       = {https://mlanthology.org/neurips/2021/zheng2021neurips-topological/}
}