L1 Regularization in Infinite Dimensional Feature Spaces
Abstract
In this paper we discuss the problem of fitting ℓ_1 regularized prediction models in infinite (possibly non-countable) dimensional feature spaces. Our main contributions are: a. Deriving a generalization of ℓ_1 regularization based on measures, which can be applied in non-countable feature spaces; b. Proving that the sparsity property of ℓ_1 regularization is maintained in infinite dimensions; c. Devising a path-following algorithm that can generate the set of regularized solutions in "nice" feature spaces; and d. Presenting an example of penalized spline models where this path-following algorithm is computationally feasible and gives encouraging empirical results.
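The measure-based generalization mentioned in contribution (a) can be sketched as follows; the notation (loss L, basis functions φ, regularization parameter λ) is generic and assumed for illustration, not taken verbatim from the paper:

```latex
% Sketch, under assumed notation: the coefficient vector of finite-dimensional
% l1 regression is replaced by a signed measure mu on the feature index set
% Omega, and the l1 norm by the total variation |mu|(Omega).
\[
  \min_{\mu}\; \sum_{i=1}^{n}
    L\!\left(y_i,\; \int_{\Omega} \phi(x_i, u)\, d\mu(u)\right)
  \;+\; \lambda\, |\mu|(\Omega)
\]
% When mu is discrete, mu = sum_j beta_j * delta_{u_j}, the penalty
% |mu|(Omega) = sum_j |beta_j| reduces to the ordinary l1 norm,
% so the countable case is recovered as a special instance.
```

In this view, sparsity of the regularized solution corresponds to the optimal measure being supported on finitely many points of Ω.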
Rosset et al. "L1 Regularization in Infinite Dimensional Feature Spaces." Annual Conference on Computational Learning Theory, 2007. doi:10.1007/978-3-540-72927-3_39
@inproceedings{rosset2007colt-l,
title = {{L1 Regularization in Infinite Dimensional Feature Spaces}},
author = {Rosset, Saharon and Swirszcz, Grzegorz and Srebro, Nathan and Zhu, Ji},
booktitle = {Annual Conference on Computational Learning Theory},
year = {2007},
pages = {544-558},
doi = {10.1007/978-3-540-72927-3_39},
url = {https://mlanthology.org/colt/2007/rosset2007colt-l/}
}