Nonconvex Theory of $M$-Estimators with Decomposable Regularizers

Abstract

High-dimensional inference addresses scenarios in which the dimension of the data approaches, or even surpasses, the sample size. In these settings, the regularized $M$-estimator is a standard tool for parameter estimation. Negahban et al. (2009) establish a unified framework for deriving convergence rates under high-dimensional scaling, showing that the estimation error is confined to a restricted set and thereby obtaining fast convergence rates. The key assumption underlying their work is convexity of the loss function. However, many loss functions arising in high-dimensional problems are nonconvex. This raises two questions: if the loss function is nonconvex, does the estimation error still fall within a restricted set? And if so, can we still recover convergence rates for the estimation error in the nonconvex setting? This paper provides affirmative answers to both questions.
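
For readers unfamiliar with the setup, the regularized $M$-estimators in this line of work take the following generic form; this is a sketch in the standard notation of Negahban et al. (2009), not the paper's exact statement. Given an empirical loss $\mathcal{L}_n$ over $n$ samples $Z_1^n$, a decomposable regularizer $\mathcal{R}$ (e.g., the $\ell_1$ norm), and a regularization parameter $\lambda_n > 0$,

$$\hat{\theta}_{\lambda_n} \in \arg\min_{\theta \in \mathbb{R}^p} \Bigl\{ \mathcal{L}_n(\theta; Z_1^n) + \lambda_n \, \mathcal{R}(\theta) \Bigr\}.$$

In the convex framework, a suitable choice of $\lambda_n$ ensures that the error $\hat{\theta}_{\lambda_n} - \theta^*$ lies in a restricted (cone-like) set, which is what drives the fast convergence rates; the question studied here is whether this continues to hold when $\mathcal{L}_n$ is nonconvex.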

Cite

Text

Liu. "Nonconvex Theory of $m$-Estimators with Decomposable Regularizers." Proceedings of the 42nd International Conference on Machine Learning, 2025.

Markdown

[Liu. "Nonconvex Theory of $m$-Estimators with Decomposable Regularizers." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/liu2025icml-nonconvex/)

BibTeX

@inproceedings{liu2025icml-nonconvex,
  title     = {{Nonconvex Theory of $M$-Estimators with Decomposable Regularizers}},
  author    = {Liu, Weiwei},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  year      = {2025},
  pages     = {39162--39170},
  volume    = {267},
  url       = {https://mlanthology.org/icml/2025/liu2025icml-nonconvex/}
}