Robust Bi-Tempered Logistic Loss Based on Bregman Divergences

Abstract

We introduce a temperature into the exponential function and replace the softmax output layer of neural networks by a high-temperature generalization. Similarly, the logarithm in the log loss we use for training is replaced by a low-temperature logarithm. By tuning the two temperatures, we create loss functions that are non-convex already in the single-layer case. When replacing the last layer of a neural network by our bi-temperature generalization of the logistic loss, training becomes more robust to noise. We visualize the effect of tuning the two temperatures in a simple setting and show the efficacy of our method on large datasets. Our methodology is based on Bregman divergences and is superior to a related two-temperature method that uses the Tsallis divergence.
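To make the two temperatures concrete, the sketch below shows how a tempered logarithm and exponential can replace the ordinary log and exp in a softmax cross-entropy loss. It is a minimal NumPy illustration under the standard tempered definitions log_t(x) = (x^(1-t) - 1)/(1-t) and exp_t(x) = [1 + (1-t)x]_+^(1/(1-t)), not the authors' implementation: the function names (`log_t`, `exp_t`, `tempered_softmax`, `bi_tempered_loss`), the fixed-point normalization with `num_iters` steps, the `eps` guard, and the example logits are choices made here for illustration.

```python
import numpy as np

def log_t(x, t):
    """Tempered logarithm log_t(x); reduces to log(x) as t -> 1."""
    if t == 1.0:
        return np.log(x)
    return (x ** (1.0 - t) - 1.0) / (1.0 - t)

def exp_t(x, t):
    """Tempered exponential exp_t(x); reduces to exp(x) as t -> 1."""
    if t == 1.0:
        return np.exp(x)
    return np.maximum(1.0 + (1.0 - t) * x, 0.0) ** (1.0 / (1.0 - t))

def tempered_softmax(activations, t, num_iters=30):
    """Heavy-tailed softmax replacement for t >= 1, normalized by a
    fixed-point iteration on the log_t partition function."""
    mu = np.max(activations)
    a = activations - mu
    z = a.copy()
    for _ in range(num_iters):
        partition = np.sum(exp_t(z, t))
        z = partition ** (1.0 - t) * a
    partition = np.sum(exp_t(z, t))
    normalizer = -log_t(1.0 / partition, t) + mu
    return exp_t(activations - normalizer, t)

def bi_tempered_loss(activations, labels, t1, t2):
    """Bi-tempered logistic loss between (one-hot) labels and the
    tempered-softmax probabilities of the activations."""
    probs = tempered_softmax(activations, t2)
    eps = 1e-10  # numerical guard so 0 * log_t(0) stays finite at t1 = 1
    return np.sum(
        labels * (log_t(labels + eps, t1) - log_t(probs, t1))
        - (labels ** (2.0 - t1) - probs ** (2.0 - t1)) / (2.0 - t1)
    )

# Example: a 3-class logit vector with a one-hot label.
logits = np.array([2.0, 0.5, -1.0])
onehot = np.array([1.0, 0.0, 0.0])
print(bi_tempered_loss(logits, onehot, t1=0.7, t2=1.3))  # bounded loss, heavy-tailed softmax
print(bi_tempered_loss(logits, onehot, t1=1.0, t2=1.0))  # ordinary softmax cross entropy
```

With t1 = t2 = 1 the sketch reduces to ordinary softmax cross entropy; choosing t1 < 1 bounds the loss (limiting the pull of outliers) and t2 > 1 gives the tempered softmax a heavier tail, which is the regime the paper uses for noise robustness.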

Cite

Text

Amid et al. "Robust Bi-Tempered Logistic Loss Based on Bregman Divergences." Neural Information Processing Systems, 2019.

Markdown

[Amid et al. "Robust Bi-Tempered Logistic Loss Based on Bregman Divergences." Neural Information Processing Systems, 2019.](https://mlanthology.org/neurips/2019/amid2019neurips-robust/)

BibTeX

@inproceedings{amid2019neurips-robust,
  title     = {{Robust Bi-Tempered Logistic Loss Based on Bregman Divergences}},
  author    = {Amid, Ehsan and Warmuth, Manfred K. and Anil, Rohan and Koren, Tomer},
  booktitle = {Neural Information Processing Systems},
  year      = {2019},
  pages     = {15013--15022},
  url       = {https://mlanthology.org/neurips/2019/amid2019neurips-robust/}
}