Parameterized Rate-Distortion Stochastic Encoder

Abstract

We propose a novel, tractable gradient-based approach to the Blahut-Arimoto (BA) algorithm for computing the rate-distortion function, in which the BA algorithm is fully parameterized. This yields a rich and flexible framework for learning a new class of stochastic encoders, termed the PArameterized RAte-DIstortion Stochastic Encoder (PARADISE). The framework can be applied to a wide range of settings, from semi-supervised and multi-task learning to supervised and robust learning. We show that the training objective of PARADISE can be viewed as a form of regularization that helps improve generalization. With an emphasis on robust learning, we further develop a novel posterior matching objective to encourage smoothness of the loss function and show that PARADISE can significantly improve interpretability as well as robustness to adversarial attacks on the CIFAR-10 and ImageNet datasets. In particular, on CIFAR-10, our model reduces standard and adversarial error rates relative to the state of the art by 50% and 41%, respectively, without the expensive computational cost of adversarial training.
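For context, the classical BA fixed-point iterations that the paper parameterizes can be sketched in a few lines of NumPy. This is a minimal sketch of the textbook algorithm for a discrete source, not the paper's parameterized variant; the function name and interface are illustrative.

```python
import numpy as np

def blahut_arimoto(p_x, d, beta, n_iter=500, tol=1e-10):
    """Classical Blahut-Arimoto iterations for the rate-distortion function.

    p_x  : (n,) source distribution p(x)
    d    : (n, m) distortion matrix d(x, x_hat)
    beta : Lagrange multiplier trading off rate against distortion
    Returns the rate R (in nats) and expected distortion D at convergence.
    """
    n, m = d.shape
    q = np.full(m, 1.0 / m)  # reproduction marginal q(x_hat), initialized uniform
    for _ in range(n_iter):
        # Conditional update: p(x_hat | x) proportional to q(x_hat) * exp(-beta * d(x, x_hat))
        log_w = np.log(q)[None, :] - beta * d
        w = np.exp(log_w - log_w.max(axis=1, keepdims=True))
        p_cond = w / w.sum(axis=1, keepdims=True)
        # Marginal update: q(x_hat) = sum_x p(x) p(x_hat | x)
        q_new = p_x @ p_cond
        if np.max(np.abs(q_new - q)) < tol:
            q = q_new
            break
        q = q_new
    joint = p_x[:, None] * p_cond
    D = np.sum(joint * d)
    # Mutual information I(X; X_hat); logs are clamped to avoid -inf on underflow.
    R = np.sum(joint * (np.log(np.maximum(p_cond, 1e-300))
                        - np.log(np.maximum(q, 1e-300))[None, :]))
    return R, D

# Example: binary uniform source with Hamming distortion.
p_x = np.array([0.5, 0.5])
d = 1.0 - np.eye(2)
R, D = blahut_arimoto(p_x, d, beta=3.0)
print(f"R = {R:.4f} nats, D = {D:.4f}")
```

Sweeping `beta` traces out the rate-distortion curve point by point; PARADISE instead replaces these fixed-point updates with a fully parameterized encoder trained by gradient descent, as described in the abstract above.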

Cite

Text

Hoang et al. "Parameterized Rate-Distortion Stochastic Encoder." International Conference on Machine Learning, 2020.

Markdown

[Hoang et al. "Parameterized Rate-Distortion Stochastic Encoder." International Conference on Machine Learning, 2020.](https://mlanthology.org/icml/2020/hoang2020icml-parameterized/)

BibTeX

@inproceedings{hoang2020icml-parameterized,
  title     = {{Parameterized Rate-Distortion Stochastic Encoder}},
  author    = {Hoang, Quan and Le, Trung and Phung, Dinh},
  booktitle = {International Conference on Machine Learning},
  year      = {2020},
  pages     = {4293--4303},
  volume    = {119},
  url       = {https://mlanthology.org/icml/2020/hoang2020icml-parameterized/}
}