Robust Variational Autoencoding with Wasserstein Penalty for Novelty Detection

Abstract

We propose a new method for novelty detection that tolerates high corruption of the training points, whereas previous works assumed either no or very low corruption. Our method trains a robust variational autoencoder (VAE), which aims to generate a model for the uncorrupted training points. To gain robustness to high corruption, we incorporate the following four changes to the common VAE: 1. Extracting crucial features of the latent code by a carefully designed dimension reduction component for distributions; 2. Modeling the latent distribution as a mixture of Gaussian low-rank inliers and full-rank outliers, where testing uses only the inlier model; 3. Applying the Wasserstein-1 metric for regularization, instead of the Kullback-Leibler (KL) divergence; and 4. Using a robust error for reconstruction. We establish that the Wasserstein metric, unlike the KL divergence, is both robust to outliers and suited to low-rank modeling. We demonstrate state-of-the-art results on standard benchmarks.
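To make the loss structure concrete, here is a minimal sketch of the two ingredients named in changes 3 and 4: a robust (L1) reconstruction error and a Wasserstein-1 regularizer. All function names and the per-dimension (sliced) W1 approximation are illustrative assumptions, not the authors' implementation; for equal-size 1-D samples, W1 reduces to the mean absolute difference of the sorted values.

```python
import numpy as np

def wasserstein1_1d(x, y):
    """W1 distance between two equal-size 1-D empirical distributions:
    the mean absolute difference of the sorted samples."""
    return np.mean(np.abs(np.sort(x) - np.sort(y)))

def robust_vae_loss(x, x_recon, z, z_prior, lam=1.0):
    """Hypothetical loss sketch: robust reconstruction + W1 penalty.

    x, x_recon : (n, d) data and reconstructions
    z, z_prior : (n, k) latent samples and samples from the target prior
    lam        : regularization weight (illustrative)
    """
    # Robust L1 reconstruction error instead of the usual squared error.
    recon = np.mean(np.abs(x - x_recon))
    # W1 penalty applied per latent dimension, then averaged
    # (a crude sliced approximation of the latent-space W1 distance).
    w1 = np.mean([wasserstein1_1d(z[:, j], z_prior[:, j])
                  for j in range(z.shape[1])])
    return recon + lam * w1
```

Unlike the KL divergence, this empirical W1 term stays finite when the latent samples concentrate on a low-dimensional set, which is the intuition behind preferring it for low-rank inlier modeling.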

Cite

Text

Lai et al. "Robust Variational Autoencoding with Wasserstein Penalty for Novelty Detection." Artificial Intelligence and Statistics, 2023.

Markdown

[Lai et al. "Robust Variational Autoencoding with Wasserstein Penalty for Novelty Detection." Artificial Intelligence and Statistics, 2023.](https://mlanthology.org/aistats/2023/lai2023aistats-robust/)

BibTeX

@inproceedings{lai2023aistats-robust,
  title     = {{Robust Variational Autoencoding with Wasserstein Penalty for Novelty Detection}},
  author    = {Lai, Chieh-Hsin and Zou, Dongmian and Lerman, Gilad},
  booktitle = {Artificial Intelligence and Statistics},
  year      = {2023},
  pages     = {3538--3567},
  volume    = {206},
  url       = {https://mlanthology.org/aistats/2023/lai2023aistats-robust/}
}