Winner-Take-All Autoencoders

Abstract

In this paper, we propose a winner-take-all method for learning hierarchical sparse representations in an unsupervised fashion. We first introduce fully-connected winner-take-all autoencoders which use mini-batch statistics to directly enforce a lifetime sparsity in the activations of the hidden units. We then propose the convolutional winner-take-all autoencoder which combines the benefits of convolutional architectures and autoencoders for learning shift-invariant sparse representations. We describe a way to train convolutional autoencoders layer by layer, where in addition to lifetime sparsity, a spatial sparsity within each feature map is achieved using winner-take-all activation functions. We will show that winner-take-all autoencoders can be used to learn deep sparse representations from the MNIST, CIFAR-10, ImageNet, Street View House Numbers and Toronto Face datasets, and achieve competitive classification performance.
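As a rough illustration of the two sparsity constraints named in the abstract, here is a minimal PyTorch sketch; it is not the authors' implementation, and the tensor shapes and the lifetime sparsity rate are illustrative assumptions:

```python
# Sketch (assumed, not from the paper's code) of the two winner-take-all
# operations described in the abstract: lifetime sparsity across a
# mini-batch, and spatial sparsity within each convolutional feature map.
import torch

def lifetime_sparsity(h, rate=0.05):
    """Fully-connected WTA: for each hidden unit, keep only its k largest
    activations across the mini-batch and zero out the rest."""
    batch, units = h.shape
    k = max(1, int(rate * batch))
    # per-unit threshold = k-th largest activation over the batch
    thresh = h.topk(k, dim=0).values[-1]           # shape: (units,)
    return h * (h >= thresh).float()

def spatial_sparsity(fmap):
    """Convolutional WTA: within each feature map of each example, keep
    only the single largest activation and zero out everything else."""
    b, c, height, width = fmap.shape
    flat = fmap.view(b, c, -1)
    maxval, _ = flat.max(dim=2, keepdim=True)      # shape: (b, c, 1)
    mask = (flat == maxval).float()
    return (flat * mask).view(b, c, height, width)

# Usage: apply to the encoder activations before decoding, so gradients
# flow only through the winning units. Shapes below are made up.
h = torch.relu(torch.randn(128, 256))              # FC hidden activations
fm = torch.relu(torch.randn(128, 64, 16, 16))      # conv feature maps
print(lifetime_sparsity(h).count_nonzero().item(),
      spatial_sparsity(fm).count_nonzero().item())
```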

Cite

Text

Makhzani and Frey. "Winner-Take-All Autoencoders." Neural Information Processing Systems, 2015.

Markdown

[Makhzani and Frey. "Winner-Take-All Autoencoders." Neural Information Processing Systems, 2015.](https://mlanthology.org/neurips/2015/makhzani2015neurips-winnertakeall/)

BibTeX

@inproceedings{makhzani2015neurips-winnertakeall,
  title     = {{Winner-Take-All Autoencoders}},
  author    = {Makhzani, Alireza and Frey, Brendan J.},
  booktitle = {Neural Information Processing Systems},
  year      = {2015},
  pages     = {2791--2799},
  url       = {https://mlanthology.org/neurips/2015/makhzani2015neurips-winnertakeall/}
}