Error Estimation and Adaptive Tuning for Unregularized Robust M-Estimator

Abstract

We consider unregularized robust M-estimators for linear models under Gaussian design and heavy-tailed noise, in the proportional asymptotics regime where the sample size n and the number of features p both increase such that $p/n \to \gamma\in (0,1)$. An estimator of the out-of-sample error of a robust M-estimator is analyzed and proved to be consistent for a large family of loss functions that includes the Huber loss. As an application of this result, we propose an adaptive tuning procedure for the scale parameter $\lambda>0$ of a given loss function $\rho$: choosing $\hat \lambda$ in a given interval $I$ to minimize the out-of-sample error estimate of the M-estimator constructed with loss $\rho_\lambda(\cdot) = \lambda^2 \rho(\cdot/\lambda)$ leads to the optimal out-of-sample error over $I$. The proof relies on a smoothing argument: the unregularized M-estimation objective function is perturbed, or smoothed, with a Ridge penalty that vanishes as $n\to+\infty$, and the unregularized M-estimator of interest is shown to inherit properties of its smoothed version.
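To make the tuning procedure concrete: for the standard Huber loss (quadratic for $|u|\le 1$, linear beyond), the scaled loss $\rho_\lambda(t) = \lambda^2 \rho(t/\lambda)$ is again a Huber loss whose quadratic-to-linear transition occurs at $|t| = \lambda$. The sketch below (not from the paper) fits the unregularized M-estimator with this scaled Huber loss and selects $\hat\lambda$ on a grid discretizing the interval $I$ by minimizing an out-of-sample error estimate. The function oos_error_estimate is a hypothetical stand-in: the paper analyzes a consistent estimate that requires no sample splitting, whereas here, purely for illustration, K-fold cross-validation is substituted as a generic proxy.

# Minimal sketch (assumptions: n > p, scaled Huber loss, CV as a
# stand-in for the paper's split-free out-of-sample error estimate).
import numpy as np
from scipy.optimize import minimize

def scaled_huber(t, lam):
    """rho_lambda(t) = lam^2 * rho(t / lam) for the standard Huber rho;
    quadratic for |t| <= lam, linear beyond, transition at |t| = lam."""
    u = np.abs(t)
    return np.where(u <= lam, 0.5 * t**2, lam * u - 0.5 * lam**2)

def fit_m_estimator(X, y, lam):
    """Unregularized M-estimator: argmin_b sum_i rho_lambda(y_i - x_i'b)."""
    obj = lambda b: scaled_huber(y - X @ b, lam).sum()
    b0 = np.linalg.lstsq(X, y, rcond=None)[0]  # least-squares warm start
    return minimize(obj, b0, method="L-BFGS-B").x

def oos_error_estimate(X, y, lam, k=5, seed=0):
    """Hypothetical proxy for the paper's out-of-sample error estimate,
    implemented as K-fold CV of the squared prediction error."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    errs = []
    for fold in np.array_split(rng.permutation(n), k):
        mask = np.ones(n, dtype=bool)
        mask[fold] = False
        b = fit_m_estimator(X[mask], y[mask], lam)
        errs.append(np.mean((y[fold] - X[fold] @ b) ** 2))
    return np.mean(errs)

def tune_lambda(X, y, grid):
    """Adaptive tuning: pick lambda-hat on the grid minimizing the estimate."""
    return min(grid, key=lambda lam: oos_error_estimate(X, y, lam))

A typical call would be tune_lambda(X, y, np.geomspace(0.1, 10.0, 20)); the grid plays the role of the interval $I$, and the selected $\hat\lambda$ is the one whose estimated out-of-sample error is smallest.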

Cite

Text

Bellec and Koriyama. "Error Estimation and Adaptive Tuning for Unregularized Robust M-Estimator." Journal of Machine Learning Research, 2025.

Markdown

[Bellec and Koriyama. "Error Estimation and Adaptive Tuning for Unregularized Robust M-Estimator." Journal of Machine Learning Research, 2025.](https://mlanthology.org/jmlr/2025/bellec2025jmlr-error/)

BibTeX

@article{bellec2025jmlr-error,
  title     = {{Error Estimation and Adaptive Tuning for Unregularized Robust M-Estimator}},
  author    = {Bellec, Pierre C. and Koriyama, Takuya},
  journal   = {Journal of Machine Learning Research},
  year      = {2025},
  pages     = {1--40},
  volume    = {26},
  url       = {https://mlanthology.org/jmlr/2025/bellec2025jmlr-error/}
}