Non-Asymptotic Guarantees for Robust Statistical Learning Under Infinite Variance Assumption

Abstract

There has been a surge of interest in developing robust estimators for models with heavy-tailed but bounded-variance data in statistics and machine learning, whereas few works allow the variance to be unbounded. This paper proposes two types of robust estimators: the ridge log-truncated M-estimator and the elastic net log-truncated M-estimator. The first is applied to convex regression problems such as quantile regression and generalized linear models, while the second is applied to high-dimensional non-convex learning problems such as regression via deep neural networks. Simulations and real-data analysis demonstrate the robustness of log-truncated estimation over standard estimation.
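
For intuition, below is a minimal Python sketch of a ridge log-truncated M-estimator for least squares, not the paper's implementation. It assumes the classical Catoni-type truncation psi(x) = sign(x) log(1 + |x| + x^2/2) as the log-truncation (the paper's exact truncation function may differ), and the function names, hyperparameters (alpha, lam, lr), and plain gradient-descent solver are illustrative choices.

import numpy as np

def catoni_log_truncation(x):
    # Catoni-type influence function psi(x) = sign(x) * log(1 + |x| + x^2 / 2).
    # Illustrative choice; the paper's log-truncation may be defined differently.
    return np.sign(x) * np.log1p(np.abs(x) + 0.5 * x**2)

def ridge_log_truncated_m_estimator(X, y, lam=0.01, alpha=0.1, lr=0.01, n_iter=500):
    """Gradient descent on a log-truncated squared loss with a ridge penalty.

    Minimizes (1/(n*alpha)) * sum_i psi(alpha * loss_i(theta)) + lam * ||theta||^2,
    where loss_i is the per-sample squared error and psi is the Catoni truncation.
    Truncating the loss before averaging damps the influence of heavy-tailed samples.
    """
    n, p = X.shape
    theta = np.zeros(p)
    for _ in range(n_iter):
        resid = y - X @ theta
        u = alpha * 0.5 * resid**2          # truncation argument, always >= 0
        # Chain rule: grad of (1/(n*alpha)) * psi(alpha * loss_i) is
        # (1/n) * psi'(u_i) * (-resid_i * x_i), with psi'(u) = (1+|u|)/(1+|u|+u^2/2).
        psi_prime = (1.0 + np.abs(u)) / (1.0 + np.abs(u) + 0.5 * u**2)
        grad = -(X.T @ (psi_prime * resid)) / n + 2.0 * lam * theta
        theta -= lr * grad
    return theta

# Example with infinite-variance noise: centered Pareto noise with tail index 1.5
# has a finite mean but infinite variance.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 5))
theta_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
noise = rng.pareto(1.5, size=500) - 2.0     # E[noise] = 0, Var[noise] = infinity
y = X @ theta_true + noise
print(ridge_log_truncated_m_estimator(X, y))

Because psi'(u) tends to 0 as the per-sample loss grows, extreme residuals contribute a bounded amount to each gradient step, which is the mechanism behind the non-asymptotic guarantees under only low-order moment conditions.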

Cite

Text

Xu et al. "Non-Asymptotic Guarantees for Robust Statistical Learning Under Infinite Variance Assumption." Journal of Machine Learning Research, 24:1-46, 2023.

Markdown

[Xu et al. "Non-Asymptotic Guarantees for Robust Statistical Learning Under Infinite Variance Assumption." Journal of Machine Learning Research, 24:1-46, 2023.](https://mlanthology.org/jmlr/2023/xu2023jmlr-nonasymptotic/)

BibTeX

@article{xu2023jmlr-nonasymptotic,
  title     = {{Non-Asymptotic Guarantees for Robust Statistical Learning Under Infinite Variance Assumption}},
  author    = {Xu, Lihu and Yao, Fang and Yao, Qiuran and Zhang, Huiming},
  journal   = {Journal of Machine Learning Research},
  year      = {2023},
  pages     = {1--46},
  volume    = {24},
  url       = {https://mlanthology.org/jmlr/2023/xu2023jmlr-nonasymptotic/}
}