Data Sparse Nonparametric Regression with $ε$-Insensitive Losses
Abstract
Leveraging the celebrated support vector regression (SVR) method, we propose a unifying framework for delivering regression machines in reproducing kernel Hilbert spaces (RKHSs) with data sparsity. The central ingredient is a new definition of $ε$-insensitivity, valid for many regression losses (including quantile and expectile regression) and their multivariate extensions. We show that the dual of empirical risk minimization with an $ε$-insensitive loss involves a data-sparse regularizer. We also provide an excess-risk analysis as well as a randomized coordinate descent algorithm for solving the dual. Numerical experiments validate our approach.
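The paper's framework generalizes $ε$-insensitivity well beyond this setting, but the mechanism is easiest to see in the classical scalar SVR dual. With $β_i = α_i - α_i^*$ and the bias term dropped (which removes the constraint $\sum_i β_i = 0$), the dual reads $\min_{|β_i| \le C} \tfrac{1}{2} β^\top K β - y^\top β + ε \|β\|_1$: the $ε$ of the loss reappears as an $\ell_1$ penalty on the dual variables, so $β_i = 0$ means sample $i$ is not a support vector. The sketch below, purely for orientation, runs randomized coordinate descent on this dual; each coordinate update is a closed-form soft-thresholding step followed by a box projection. The kernel choice and all hyperparameters (`eps`, `C`, `gamma`, iteration count) are illustrative assumptions, not the paper's generalized algorithm.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    """Gram matrix of the Gaussian (RBF) kernel."""
    sq = np.sum(X ** 2, axis=1)
    return np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))

def svr_dual_rcd(K, y, eps=0.1, C=1.0, n_iters=20000, seed=0):
    """Randomized coordinate descent on the (bias-free) scalar SVR dual:

        min_{|beta_i| <= C}  0.5 * beta^T K beta - y^T beta + eps * ||beta||_1

    The eps * ||beta||_1 term is the sparsity-inducing regularizer induced
    by the eps-insensitive loss; beta_i == 0 <=> sample i is inactive.
    """
    rng = np.random.default_rng(seed)
    n = len(y)
    beta = np.zeros(n)
    Kbeta = np.zeros(n)  # running value of K @ beta, updated incrementally
    for _ in range(n_iters):
        i = rng.integers(n)  # coordinate drawn uniformly at random
        # Partial gradient of the smooth part, excluding the beta_i term.
        g = Kbeta[i] - K[i, i] * beta[i] - y[i]
        # Exact 1-D minimizer: soft-thresholding, then box projection.
        b = np.sign(-g) * max(abs(g) - eps, 0.0) / K[i, i]
        b = float(np.clip(b, -C, C))
        Kbeta += K[:, i] * (b - beta[i])
        beta[i] = b
    return beta

# Toy usage: with eps > 0 only a few dual coefficients stay nonzero.
rng = np.random.default_rng(1)
X = np.linspace(0.0, 3.0, 50)[:, None]
y = np.sin(2.0 * X[:, 0]) + 0.05 * rng.standard_normal(50)
K = rbf_kernel(X, gamma=2.0)
beta = svr_dual_rcd(K, y, eps=0.1, C=10.0)
print("support vectors:", int(np.sum(np.abs(beta) > 1e-8)), "of", len(y))
```

Raising `eps` drives more $β_i$ to exactly zero, which is the data sparsity advertised in the abstract: the fitted function $f(x) = \sum_i β_i k(x_i, x)$ depends only on the remaining support vectors.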
Cite
Text
Sangnier et al. "Data Sparse Nonparametric Regression with $ε$-Insensitive Losses." Proceedings of the Ninth Asian Conference on Machine Learning, 2017.
Markdown
[Sangnier et al. "Data Sparse Nonparametric Regression with $ε$-Insensitive Losses." Proceedings of the Ninth Asian Conference on Machine Learning, 2017.](https://mlanthology.org/acml/2017/sangnier2017acml-data/)
BibTeX
@inproceedings{sangnier2017acml-data,
  title = {{Data Sparse Nonparametric Regression with $\epsilon$-Insensitive Losses}},
  author = {Sangnier, Maxime and Fercoq, Olivier and d'Alché-Buc, Florence},
  booktitle = {Proceedings of the Ninth Asian Conference on Machine Learning},
  year = {2017},
  pages = {192--207},
  volume = {77},
  url = {https://mlanthology.org/acml/2017/sangnier2017acml-data/}
}