On Model Selection Consistency of Penalized M-Estimators: A Geometric Theory
Abstract
Penalized M-estimators are used in diverse areas of science and engineering to fit high-dimensional models with some low-dimensional structure. Often, the penalties are geometrically decomposable, i.e., they can be expressed as a sum of (convex) support functions. We generalize the notion of irrepresentability to geometrically decomposable penalties and develop a general framework for establishing consistency and model selection consistency of M-estimators with such penalties. We then use this framework to derive results for some special cases of interest in bioinformatics and statistical learning.
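To make the decomposition concrete, here is a minimal illustrative sketch in our own notation (the set names below are not quoted from the paper): the support function of a closed convex set $C$ is the supremum of inner products over $C$, and the lasso penalty is itself a support function, so a sum of such terms is a geometrically decomposable penalty in the sense of the abstract.

% Support function of a closed convex set C (standard definition):
\[
  h_C(x) \;=\; \sup_{u \in C} \langle u, x \rangle .
\]
% Example (illustrative notation): the ell_1 penalty is the support function of the
% unit ell_infinity ball, so penalties built as sums of support functions, e.g.
% rho(x) = h_A(x) + h_I(x), are geometrically decomposable.
\[
  \|x\|_1 \;=\; h_{B_\infty}(x),
  \quad
  B_\infty = \{ u : \|u\|_\infty \le 1 \},
  \qquad
  \rho(x) \;=\; h_A(x) + h_I(x).
\]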
Cite

Text

Lee et al. "On Model Selection Consistency of Penalized M-Estimators: A Geometric Theory." Neural Information Processing Systems, 2013.

Markdown

[Lee et al. "On Model Selection Consistency of Penalized M-Estimators: A Geometric Theory." Neural Information Processing Systems, 2013.](https://mlanthology.org/neurips/2013/lee2013neurips-model/)

BibTeX
@inproceedings{lee2013neurips-model,
  title     = {{On Model Selection Consistency of Penalized M-Estimators: A Geometric Theory}},
  author    = {Lee, Jason and Sun, Yuekai and Taylor, Jonathan E},
  booktitle = {Neural Information Processing Systems},
  year      = {2013},
  pages     = {342--350},
  url       = {https://mlanthology.org/neurips/2013/lee2013neurips-model/}
}