Training Deep Neural Networks via Direct Loss Minimization
Abstract
Supervised training of deep neural nets typically relies on minimizing cross-entropy. However, in many domains, we are interested in performing well on metrics specific to the application. In this paper we propose a direct loss minimization approach to train deep neural networks, which provably minimizes the application-specific loss function. This is often non-trivial, since these functions are neither smooth nor decomposable and thus are not amenable to optimization with standard gradient-based methods. We demonstrate the effectiveness of our approach in the context of maximizing average precision for ranking problems. Towards this goal, we develop a novel dynamic programming algorithm that can efficiently compute the weight updates. Our approach proves superior to a variety of baselines in the context of action classification and object detection, especially in the presence of label noise.
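To make the idea concrete, here is a minimal, illustrative sketch (not the authors' implementation) of a direct-loss update for average precision: a plain ranking and a loss-augmented ranking are inferred, and the difference of their score gradients gives a finite-difference estimate of the task-loss gradient, in the general spirit of direct loss minimization. The brute-force search over all rankings stands in for the paper's dynamic program, and the function names (`average_precision`, `loss_augmented_ranking`, `direct_ap_gradient`) as well as the linear position-weighted ranking score are assumptions made for this toy example.

```python
# Illustrative sketch of a direct-loss (average precision) gradient.
# NOT the authors' algorithm: loss-augmented inference is brute force here,
# whereas the paper computes it efficiently with dynamic programming.
import itertools
import numpy as np

def average_precision(ranking, labels):
    """AP of a ranking (item indices, best first) given binary relevance labels."""
    hits, precisions = 0, []
    for k, idx in enumerate(ranking, start=1):
        if labels[idx] == 1:
            hits += 1
            precisions.append(hits / k)
    return float(np.mean(precisions)) if precisions else 0.0

def ranking_score(ranking, scores):
    """Toy linear ranking score: higher-ranked items get larger position weights."""
    n = len(ranking)
    position_weight = np.arange(n, 0, -1, dtype=float)  # n, n-1, ..., 1
    return float(np.dot(position_weight, scores[list(ranking)]))

def loss_augmented_ranking(scores, labels, epsilon):
    """Brute-force argmax over rankings of score(y) + epsilon * (1 - AP(y)).
    Only feasible for tiny problems; the paper replaces this with a DP."""
    best, best_val = None, -np.inf
    for perm in itertools.permutations(range(len(scores))):
        val = ranking_score(perm, scores) + epsilon * (1.0 - average_precision(perm, labels))
        if val > best_val:
            best, best_val = perm, val
    return np.array(best)

def direct_ap_gradient(scores, labels, epsilon=1.0):
    """Finite-difference direct-loss gradient w.r.t. the per-item scores:
    (1/epsilon) * (grad of score(y_aug) - grad of score(y*))."""
    y_star = loss_augmented_ranking(scores, labels, epsilon=0.0)  # plain inference
    y_aug = loss_augmented_ranking(scores, labels, epsilon)       # loss-augmented inference
    n = len(scores)
    position_weight = np.arange(n, 0, -1, dtype=float)
    grad = np.zeros(n)
    grad[y_aug] += position_weight   # gradient of score(y_aug) w.r.t. scores
    grad[y_star] -= position_weight  # gradient of score(y*) w.r.t. scores
    return grad / epsilon            # task-loss gradient to backpropagate into the score network

# Toy usage: 5 items with binary relevance labels and random scores.
rng = np.random.default_rng(0)
scores = rng.normal(size=5)
labels = np.array([1, 0, 1, 0, 0])
print(direct_ap_gradient(scores, labels, epsilon=0.5))
```

In a deep network, the returned vector would be used as the upstream gradient for the predicted scores, so a standard backward pass propagates the (otherwise non-smooth, non-decomposable) AP-based signal into the weights.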
Cite
Text
Song et al. "Training Deep Neural Networks via Direct Loss Minimization." International Conference on Machine Learning, 2016.

Markdown
[Song et al. "Training Deep Neural Networks via Direct Loss Minimization." International Conference on Machine Learning, 2016.](https://mlanthology.org/icml/2016/song2016icml-training/)

BibTeX
@inproceedings{song2016icml-training,
title = {{Training Deep Neural Networks via Direct Loss Minimization}},
author = {Song, Yang and Schwing, Alexander and Zemel, Richard and Urtasun, Raquel},
booktitle = {International Conference on Machine Learning},
year = {2016},
pages = {2169-2177},
volume = {48},
url = {https://mlanthology.org/icml/2016/song2016icml-training/}
}