Guided Learning of Nonconvex Models Through Successive Functional Gradient Optimization

Abstract

This paper presents a framework of successive functional gradient optimization for training nonconvex models such as neural networks, where training is driven by mirror descent in a function space. We provide a theoretical analysis and an empirical study of the training method derived from this framework, and show that it outperforms standard training techniques.
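To make the idea concrete, below is a minimal sketch (not the authors' released code) of stagewise training in which each stage approximates one functional mirror-descent step: the model is pulled toward the labels while staying close, on the training inputs, to the function learned at the previous stage. The proximity term is written here as a KL divergence to the previous stage's predictions; the mixing weight `alpha`, the stage count `num_stages`, and the KL form are illustrative assumptions, and the paper's exact per-stage objectives may differ.

```python
# Hypothetical sketch of successive functional-gradient-style training.
# Assumed stage objective: alpha * CE(f(x), y) + (1 - alpha) * KL(f_prev(x) || f(x)).
import copy

import torch
import torch.nn.functional as F


def train_one_stage(model, prev_model, loader, alpha=0.3, lr=0.1, epochs=1):
    """One stage: fit the labels while staying close to the previous model."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    prev_model.eval()
    for _ in range(epochs):
        for x, y in loader:
            with torch.no_grad():
                prev_logp = F.log_softmax(prev_model(x), dim=1)
            logp = F.log_softmax(model(x), dim=1)
            ce = F.nll_loss(logp, y)
            # Proximity term KL(prev || current): keeps the new function near
            # the previous stage's function on the training inputs.
            prox = F.kl_div(logp, prev_logp, log_target=True,
                            reduction="batchmean")
            loss = alpha * ce + (1.0 - alpha) * prox
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model


def guided_training(model, loader, num_stages=5, alpha=0.3):
    """Run successive stages; each stage is guided by a frozen copy of the last."""
    for _ in range(num_stages):
        prev_model = copy.deepcopy(model)
        for p in prev_model.parameters():
            p.requires_grad_(False)
        model = train_one_stage(model, prev_model, loader, alpha=alpha)
    return model
```

With `alpha = 1` each stage reduces to ordinary loss minimization; smaller values of `alpha` make each stage a more conservative step away from the previous function, which is the proximal behavior a mirror-descent step in function space is meant to capture.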

Cite

Text

Johnson and Zhang. "Guided Learning of Nonconvex Models Through Successive Functional Gradient Optimization." International Conference on Machine Learning, 2020.

Markdown

[Johnson and Zhang. "Guided Learning of Nonconvex Models Through Successive Functional Gradient Optimization." International Conference on Machine Learning, 2020.](https://mlanthology.org/icml/2020/johnson2020icml-guided/)

BibTeX

@inproceedings{johnson2020icml-guided,
  title     = {{Guided Learning of Nonconvex Models Through Successive Functional Gradient Optimization}},
  author    = {Johnson, Rie and Zhang, Tong},
  booktitle = {International Conference on Machine Learning},
  year      = {2020},
  pages     = {4921--4930},
  volume    = {119},
  url       = {https://mlanthology.org/icml/2020/johnson2020icml-guided/}
}