Efficient Sampling for Learning Sparse Additive Models in High Dimensions
Abstract
We consider the problem of learning sparse additive models, i.e., functions of the form $f(\mathbf{x}) = \sum_{l \in S} \phi_{l}(x_l)$, $\mathbf{x} \in \mathbb{R}^d$, from point queries of $f$. Here $S$ is an unknown subset of coordinate variables with $|S| = k \ll d$. Assuming the $\phi_l$'s are smooth, we propose a set of points at which to sample $f$ and an efficient randomized algorithm that recovers a \textit{uniform approximation} to each unknown $\phi_l$. We provide a rigorous theoretical analysis of our scheme along with sample complexity bounds. Our algorithm utilizes recent results from compressive sensing theory along with a novel convex quadratic program for recovering robust uniform approximations to univariate functions from point queries corrupted with arbitrary bounded noise. Lastly, we theoretically analyze the impact of noise -- either arbitrary but bounded, or stochastic -- on the performance of our algorithm.
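To make the problem setup concrete, below is a minimal, hypothetical sketch of the sampling model described in the abstract, not the authors' algorithm: a sparse additive function $f$ in $d = 1000$ dimensions with $k = 3$ active coordinates, answering point queries corrupted with arbitrary bounded noise. The particular choices of $\phi_l$, the dimensions, and the noise level are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

d, k = 1000, 3                              # ambient dimension, sparsity k << d
S = rng.choice(d, size=k, replace=False)    # unknown set of active coordinates

# Smooth univariate components phi_l (illustrative choices, not from the paper)
phis = [np.sin, np.cos, lambda t: t**2]

def f(x):
    """Sparse additive model: f(x) = sum_{l in S} phi_l(x_l)."""
    return sum(phi(x[l]) for phi, l in zip(phis, S))

# One point query, as the learner's sampling scheme would issue it;
# the response is corrupted with arbitrary noise bounded by 1e-3 (assumed).
x = rng.uniform(-1.0, 1.0, size=d)
y = f(x) + rng.uniform(-1e-3, 1e-3)
print(y)
```

The learner only observes pairs $(x, y)$ of this form; it must both identify $S$ and recover a uniform approximation to each $\phi_l$ from such queries.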
Cite
BibTeX
@inproceedings{tyagi2014neurips-efficient,
title = {{Efficient Sampling for Learning Sparse Additive Models in High Dimensions}},
author = {Tyagi, Hemant and Gärtner, Bernd and Krause, Andreas},
booktitle = {Neural Information Processing Systems},
year = {2014},
pages = {514--522},
url = {https://mlanthology.org/neurips/2014/tyagi2014neurips-efficient/}
}