Rate-Distortion Auto-Encoders

Abstract

A rekindled interest in auto-encoder algorithms has been spurred by recent work on deep learning. Current efforts have been directed towards effective training of auto-encoder architectures with a large number of coding units. Here, we propose a learning algorithm for auto-encoders based on a rate-distortion objective that minimizes the mutual information between the inputs and the outputs of the auto-encoder subject to a fidelity constraint. The goal is to learn a representation that is minimally committed to the input data, but that is rich enough to reconstruct the inputs up to a certain level of distortion. Minimizing the mutual information acts as a regularization term, whereas the fidelity constraint can be understood as a risk functional in the conventional statistical learning setting. The proposed algorithm uses a recently introduced measure of entropy based on infinitely divisible matrices that avoids the plug-in estimation of densities. Experiments using over-complete bases show that rate-distortion auto-encoders can learn a regularized input-output mapping in an implicit manner.
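The entropy measure the abstract mentions can be sketched as follows. This is a minimal illustration, assuming the matrix-based Rényi-style entropy from the authors' related work: a (kernel) Gram matrix is normalized to unit trace and the entropy is computed from its eigenvalue spectrum, with no density estimation involved. The function name, the order parameter `alpha`, and the use of base-2 logarithms are illustrative choices, not taken from the paper itself.

```python
import numpy as np

def matrix_entropy(K, alpha=1.01):
    """Matrix-based entropy of a positive semi-definite Gram matrix K.

    Sketch of the spectral entropy functional:
        S_alpha(A) = 1/(1-alpha) * log2( sum_i lambda_i(A)^alpha ),
    where A = K / tr(K) is the trace-normalized Gram matrix.
    """
    A = K / np.trace(K)                    # normalize so eigenvalues sum to 1
    eigvals = np.linalg.eigvalsh(A)        # K symmetric PSD -> real spectrum
    eigvals = np.clip(eigvals, 0.0, None)  # guard against tiny negative noise
    return (1.0 / (1.0 - alpha)) * np.log2(np.sum(eigvals ** alpha))
```

For example, an identity Gram matrix (maximally spread spectrum over n points) yields the maximal value log2(n), while a rank-one Gram matrix (all samples identical) yields zero; a mutual-information-like quantity between inputs and codes can then be assembled from such entropies of the two Gram matrices and their Hadamard product.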

Cite

Text

Giraldo and Príncipe. "Rate-Distortion Auto-Encoders." International Conference on Learning Representations, 2014.

Markdown

[Giraldo and Príncipe. "Rate-Distortion Auto-Encoders." International Conference on Learning Representations, 2014.](https://mlanthology.org/iclr/2014/giraldo2014iclr-rate/)

BibTeX

@inproceedings{giraldo2014iclr-rate,
  title     = {{Rate-Distortion Auto-Encoders}},
  author    = {Giraldo, Luis Gonzalo Sánchez and Príncipe, José C.},
  booktitle = {International Conference on Learning Representations},
  year      = {2014},
  url       = {https://mlanthology.org/iclr/2014/giraldo2014iclr-rate/}
}