Efficient Learning with Robust Gradient Descent

Abstract

Minimizing the empirical risk is a popular training strategy, but for learning tasks where the data may be noisy or heavy-tailed, one may require many observations in order to generalize well. To achieve better performance under less stringent requirements, we introduce a procedure which constructs a robust approximation of the risk gradient for use in an iterative learning routine. Using high-probability bounds on the excess risk of this algorithm, we show that our update does not deviate far from the ideal gradient-based update. Empirical tests using both controlled simulations and real-world benchmark data show that in diverse settings, the proposed procedure can learn more efficiently, using fewer resources (iterations and observations) while generalizing better.
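The core idea sketched in the abstract, replacing the empirical mean of per-observation gradients with a robust location estimate before taking a descent step, can be illustrated as follows. This is a minimal sketch only: the paper develops its own robust estimator with high-probability guarantees, whereas here a coordinate-wise trimmed mean stands in as an assumed, simpler substitute, and the squared-loss linear model, step size, and trim fraction are illustrative choices not taken from the paper.

```python
import numpy as np

def robust_grad(X, y, w, trim=0.1):
    """Robust estimate of the risk gradient for squared-error linear
    regression. A coordinate-wise trimmed mean is used here as a
    stand-in for the paper's robust location estimator."""
    resid = X @ w - y
    grads = resid[:, None] * X          # per-observation gradients, shape (n, d)
    n = len(y)
    k = int(trim * n)
    # Sort each coordinate independently, drop the k smallest and
    # k largest values, then average the remainder.
    sorted_g = np.sort(grads, axis=0)
    if k > 0:
        sorted_g = sorted_g[k:n - k]
    return sorted_g.mean(axis=0)

def robust_gd(X, y, lr=0.1, steps=200, trim=0.1):
    """Plain gradient descent driven by the robust gradient estimate."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        w -= lr * robust_grad(X, y, w, trim)
    return w
```

Under heavy-tailed noise (e.g. Student-t residuals with few degrees of freedom), the trimmed update discards the extreme per-observation gradients that would otherwise dominate the empirical mean, so the iterates stay close to the ideal gradient path even when individual gradients are wildly outlying.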

Cite

Text

Holland and Ikeda. "Efficient Learning with Robust Gradient Descent." Machine Learning, 2019. doi:10.1007/s10994-019-05802-5

Markdown

[Holland and Ikeda. "Efficient Learning with Robust Gradient Descent." Machine Learning, 2019.](https://mlanthology.org/mlj/2019/holland2019mlj-efficient/) doi:10.1007/s10994-019-05802-5

BibTeX

@article{holland2019mlj-efficient,
  title     = {{Efficient Learning with Robust Gradient Descent}},
  author    = {Holland, Matthew J. and Ikeda, Kazushi},
  journal   = {Machine Learning},
  year      = {2019},
  pages     = {1523--1560},
  doi       = {10.1007/s10994-019-05802-5},
  volume    = {108},
  url       = {https://mlanthology.org/mlj/2019/holland2019mlj-efficient/}
}