On Empirical Risk Minimization with Dependent and Heavy-Tailed Data

Abstract

In this work, we establish risk bounds for Empirical Risk Minimization (ERM) with both dependent and heavy-tailed data-generating processes. We do so by extending the seminal works~\cite{pmlr-v35-mendelson14, mendelson2018learning} on the analysis of ERM with heavy-tailed but independent and identically distributed observations to the strictly stationary, exponentially $\beta$-mixing case. We allow the interaction between the noise and the inputs to be even polynomially heavy-tailed, which covers a significantly larger class of heavy-tailed models than those analyzed in the learning theory literature. We illustrate our theoretical results by deriving rates of convergence for high-dimensional linear regression with dependent and heavy-tailed data.

Cite

Text

Roy et al. "On Empirical Risk Minimization with Dependent and Heavy-Tailed Data." Neural Information Processing Systems, 2021.

Markdown

[Roy et al. "On Empirical Risk Minimization with Dependent and Heavy-Tailed Data." Neural Information Processing Systems, 2021.](https://mlanthology.org/neurips/2021/roy2021neurips-empirical/)

BibTeX

@inproceedings{roy2021neurips-empirical,
  title     = {{On Empirical Risk Minimization with Dependent and Heavy-Tailed Data}},
  author    = {Roy, Abhishek and Balasubramanian, Krishnakumar and Erdogdu, Murat A},
  booktitle = {Neural Information Processing Systems},
  year      = {2021},
  url       = {https://mlanthology.org/neurips/2021/roy2021neurips-empirical/}
}