Reforming Generative Autoencoders via Goodness-of-Fit Hypothesis Testing

Abstract

Generative models, while not new, have taken the deep learning field by storm. However, the widely used training methods have not exploited the substantial statistical literature on parametric distributional testing. Resting on sound theoretical foundations, these goodness-of-fit tests allow parts of the black box to be stripped away. In this paper we use the Shapiro-Wilk test, and propose a new multivariate generalization of it, to test respectively for univariate and multivariate normality of the code layer of a generative autoencoder. By replacing the discriminator in traditional deep models with these hypothesis tests, we gain several advantages: we can objectively evaluate whether the encoder actually embeds data onto a normal manifold, precisely define when convergence happens, and explicitly balance reconstruction training against encoding training. Not only does our method produce competitive results, it does so in a fraction of the time. We highlight the fact that the hypothesis tests used in our model asymptotically lead to the same solution as the L₂-Wasserstein distance metrics used by several generative models today.
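As a rough illustration of the abstract's core idea, the following sketch applies the (univariate) Shapiro-Wilk test via `scipy.stats.shapiro` to a batch of one-dimensional latent codes to check whether they are plausibly Gaussian. This is a hedged, minimal stand-in, not the paper's actual training procedure: the synthetic `codes` array plays the role of an encoder's outputs, and the 0.05 significance level is an illustrative choice.

```python
# Minimal sketch: goodness-of-fit testing of a latent code layer.
# `codes` stands in for the 1-D outputs of a hypothetical encoder;
# in the paper's setting these would come from the autoencoder itself.
import numpy as np
from scipy.stats import shapiro

rng = np.random.default_rng(0)
codes = rng.normal(loc=0.0, scale=1.0, size=500)  # placeholder latent codes

# Shapiro-Wilk returns the test statistic W and a p-value.
stat, p_value = shapiro(codes)

# Failing to reject the null at alpha = 0.05 means the codes are
# consistent with a normal distribution.
looks_normal = p_value > 0.05
print(f"W = {stat:.4f}, p = {p_value:.4f}, normal: {looks_normal}")
```

In the paper's framing, a test like this replaces the adversarial discriminator: rather than training a network to distinguish codes from Gaussian samples, the test statistic directly measures departure from normality, which also gives an objective convergence criterion.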

Cite

Text

Palmer et al. "Reforming Generative Autoencoders via Goodness-of-Fit Hypothesis Testing." Conference on Uncertainty in Artificial Intelligence, 2018.

Markdown

[Palmer et al. "Reforming Generative Autoencoders via Goodness-of-Fit Hypothesis Testing." Conference on Uncertainty in Artificial Intelligence, 2018.](https://mlanthology.org/uai/2018/palmer2018uai-reforming/)

BibTeX

@inproceedings{palmer2018uai-reforming,
  title     = {{Reforming Generative Autoencoders via Goodness-of-Fit Hypothesis Testing}},
  author    = {Palmer, Aaron and Dey, Dipak K. and Bi, Jinbo},
  booktitle = {Conference on Uncertainty in Artificial Intelligence},
  year      = {2018},
  pages     = {1009--1019},
  url       = {https://mlanthology.org/uai/2018/palmer2018uai-reforming/}
}