Choosing the Sample with Lowest Loss Makes SGD Robust
Abstract
The presence of outliers can significantly skew the parameters of machine learning models trained via stochastic gradient descent (SGD). In this paper we propose a simple variant of vanilla SGD: in each step, first choose a set of k samples, then from these choose the one with the smallest current loss, and perform an SGD-like update with this chosen sample. Vanilla SGD corresponds to $k=1$, i.e. no choice; $k \geq 2$ gives a new algorithm that, however, effectively minimizes a non-convex surrogate loss. Our main contribution is a theoretical analysis of the robustness properties of this idea for ML problems whose objectives are sums of convex losses; these results are backed up with synthetic and small-scale neural network experiments.
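The sketch below (not the authors' code) illustrates the min-loss selection rule described in the abstract on a linear least-squares problem with corrupted labels; the function name `minloss_sgd` and all hyperparameter values are illustrative assumptions.

```python
# Minimal sketch of the k-sample min-loss SGD idea for least squares.
import numpy as np

def minloss_sgd(X, y, k=4, lr=0.01, epochs=10, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        for _ in range(n):
            # Draw k candidate samples uniformly at random.
            idx = rng.choice(n, size=k, replace=False)
            # Per-sample squared losses under the current iterate.
            losses = 0.5 * (X[idx] @ w - y[idx]) ** 2
            # Keep only the candidate with the smallest current loss
            # (k = 1 recovers vanilla SGD) ...
            i = idx[np.argmin(losses)]
            # ... and take an ordinary SGD step on that single sample.
            grad = (X[i] @ w - y[i]) * X[i]
            w -= lr * grad
    return w

# Toy usage: 5% of labels are grossly corrupted outliers.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.1 * rng.normal(size=500)
outliers = rng.choice(500, size=25, replace=False)
y[outliers] += 50.0
print(np.linalg.norm(minloss_sgd(X, y, k=4) - w_true))
```

Because samples with currently large loss (such as the corrupted ones above) are rarely selected, their gradients rarely enter the updates, which is the source of the robustness analyzed in the paper.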
Cite
Text
Shah et al. "Choosing the Sample with Lowest Loss Makes SGD Robust." Artificial Intelligence and Statistics, 2020.
Markdown
[Shah et al. "Choosing the Sample with Lowest Loss Makes SGD Robust." Artificial Intelligence and Statistics, 2020.](https://mlanthology.org/aistats/2020/shah2020aistats-choosing/)
BibTeX
@inproceedings{shah2020aistats-choosing,
title = {{Choosing the Sample with Lowest Loss Makes SGD Robust}},
author = {Shah, Vatsal and Wu, Xiaoxia and Sanghavi, Sujay},
booktitle = {Artificial Intelligence and Statistics},
year = {2020},
pages = {2120-2130},
volume = {108},
url = {https://mlanthology.org/aistats/2020/shah2020aistats-choosing/}
}