Robust Variance-Regularized Risk Minimization with Concomitant Scaling
Abstract
Under potentially heavy-tailed losses, we consider the task of minimizing sums of the loss mean and standard deviation, without trying to accurately estimate the variance. By modifying a technique for variance-free robust mean estimation to fit our problem setting, we derive a simple learning procedure that can be easily combined with standard gradient-based solvers and used in traditional machine learning workflows. Empirically, we verify that our proposed approach, despite its simplicity, performs as well as or better than even the best-performing candidates derived from alternative criteria such as CVaR or DRO risks on a variety of datasets.
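To make the objective in the abstract concrete, the sketch below shows the *naive* mean-plus-standard-deviation criterion minimized with a standard gradient-based solver, here in JAX on a toy linear regression problem. This is not the paper's robust estimator (which replaces direct variance estimation with a concomitant-scaling technique); the model, data, and regularization weight `lam` are hypothetical, chosen only to illustrate how such a criterion plugs into an ordinary gradient workflow.

```python
# Hypothetical sketch: plain mean + std objective, NOT the paper's
# robust concomitant-scaling estimator.
import jax
import jax.numpy as jnp

def per_example_loss(theta, X, y):
    # Squared error of a toy linear model, one value per data point.
    return (X @ theta - y) ** 2

def objective(theta, X, y, lam=0.5):
    # Loss mean plus lam times the loss standard deviation.
    losses = per_example_loss(theta, X, y)
    return jnp.mean(losses) + lam * jnp.std(losses)

# Illustrative synthetic data (assumption, not from the paper).
key = jax.random.PRNGKey(0)
X = jax.random.normal(key, (256, 5))
theta_true = jnp.ones(5)
y = X @ theta_true + 0.1 * jax.random.normal(jax.random.PRNGKey(1), (256,))

# Plain gradient descent; any stochastic solver would work the same way.
grad_fn = jax.jit(jax.grad(objective))
theta = jnp.zeros(5)
for _ in range(500):
    theta = theta - 0.1 * grad_fn(theta, X, y)

print(objective(theta, X, y))
```

The point of the sketch is the workflow, not the estimator: because the criterion is a differentiable function of the per-example losses, it drops into standard autodiff pipelines, which is the compatibility property the abstract emphasizes.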
Cite
Text

Holland. "Robust Variance-Regularized Risk Minimization with Concomitant Scaling." Artificial Intelligence and Statistics, 2024.

Markdown

[Holland. "Robust Variance-Regularized Risk Minimization with Concomitant Scaling." Artificial Intelligence and Statistics, 2024.](https://mlanthology.org/aistats/2024/holland2024aistats-robust/)

BibTeX
@inproceedings{holland2024aistats-robust,
  title = {{Robust Variance-Regularized Risk Minimization with Concomitant Scaling}},
  author = {Holland, Matthew J.},
  booktitle = {Artificial Intelligence and Statistics},
  year = {2024},
  pages = {1144--1152},
  volume = {238},
  url = {https://mlanthology.org/aistats/2024/holland2024aistats-robust/}
}