Max-Mahalanobis Linear Discriminant Analysis Networks

Abstract

A deep neural network (DNN) consists of a nonlinear transformation from an input to a feature representation, followed by a common softmax linear classifier. Though many efforts have been devoted to designing a proper architecture for the nonlinear transformation, little investigation has been done on the classifier part. In this paper, we show that a properly designed classifier can improve robustness to adversarial attacks and lead to better prediction results. Specifically, we define a Max-Mahalanobis distribution (MMD) and theoretically show that if the input is distributed as an MMD, the linear discriminant analysis (LDA) classifier will have the best robustness to adversarial examples. We further propose a novel Max-Mahalanobis linear discriminant analysis (MM-LDA) network, which explicitly maps a complicated data distribution in the input space to an MMD in the latent feature space and then applies LDA to make predictions. Our results demonstrate that the MM-LDA networks are significantly more robust to adversarial attacks, and have better performance in class-biased classification.
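The key idea can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it places class means at the vertices of a regular simplex (so all pairwise Mahalanobis distances between means are equal, the property an MMD maximizes) and makes LDA predictions by nearest class mean, assuming an identity shared covariance for simplicity. The function names `simplex_means` and `lda_predict` are hypothetical.

```python
import numpy as np

def simplex_means(num_classes, dim, norm=1.0):
    # Vertices of a regular simplex: center the standard basis vectors,
    # then rescale each to a common norm. All pairwise distances between
    # the resulting means are equal, mimicking the equal-spacing property
    # of Max-Mahalanobis means. (Assumes dim >= num_classes.)
    assert dim >= num_classes
    eye = np.eye(num_classes)
    centered = eye - eye.mean(axis=0)
    means = centered / np.linalg.norm(centered, axis=1, keepdims=True) * norm
    # Embed in the (possibly larger) feature dimension by zero-padding.
    return np.pad(means, ((0, 0), (0, dim - num_classes)))

def lda_predict(z, means):
    # With an identity shared covariance, the Mahalanobis distance reduces
    # to the Euclidean distance, so LDA assigns the nearest class mean.
    dists = np.linalg.norm(z[None, :] - means, axis=1)
    return int(np.argmin(dists))
```

In the MM-LDA network the feature extractor is trained so that latent features of each class concentrate around these pre-specified means; the sketch above only shows the geometry of the means and the LDA decision rule.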

Cite

Text

Pang et al. "Max-Mahalanobis Linear Discriminant Analysis Networks." International Conference on Machine Learning, 2018.

Markdown

[Pang et al. "Max-Mahalanobis Linear Discriminant Analysis Networks." International Conference on Machine Learning, 2018.](https://mlanthology.org/icml/2018/pang2018icml-maxmahalanobis/)

BibTeX

@inproceedings{pang2018icml-maxmahalanobis,
  title     = {{Max-Mahalanobis Linear Discriminant Analysis Networks}},
  author    = {Pang, Tianyu and Du, Chao and Zhu, Jun},
  booktitle = {International Conference on Machine Learning},
  year      = {2018},
  pages     = {4016--4025},
  volume    = {80},
  url       = {https://mlanthology.org/icml/2018/pang2018icml-maxmahalanobis/}
}