Scalable Non-Linear Learning with Adaptive Polynomial Expansions
Abstract
Can we effectively learn a nonlinear representation in time comparable to linear learning? We describe a new algorithm that explicitly and adaptively expands higher-order interaction features over base linear representations. The algorithm is designed for extreme computational efficiency, and an extensive experimental study shows that its computation/prediction tradeoff compares very favorably against strong baselines.
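To make the idea concrete, below is a minimal illustrative sketch (not the paper's exact algorithm) of adaptively expanding interaction features over a base linear representation: fit a linear model, use the magnitudes of its weights to pick "parent" features, and add products of those parents with the base features before refitting. The function names and parameters (`adaptive_expansion`, `num_rounds`, `top_k`) are hypothetical.

```python
# Illustrative sketch only: adaptive expansion of interaction features
# guided by the learned linear weights. Not the authors' implementation.
import numpy as np

def adaptive_expansion(X, y, num_rounds=2, top_k=3):
    """Iteratively fit a linear model, then add products of the current
    top-weighted columns with the base features as new interaction columns."""
    base = X
    features = X.copy()
    for _ in range(num_rounds):
        # Least-squares fit over the current (expanded) feature set.
        w, *_ = np.linalg.lstsq(features, y, rcond=None)
        # Choose the columns with the largest absolute weights as "parents".
        parents = np.argsort(-np.abs(w))[:top_k]
        # Expand: multiply each parent column with every base column.
        new_cols = np.hstack([features[:, [p]] * base for p in parents])
        features = np.hstack([features, new_cols])
    w, *_ = np.linalg.lstsq(features, y, rcond=None)
    return features, w

# Toy usage: the target contains a genuine pairwise interaction.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = 2.0 * X[:, 0] + 1.5 * X[:, 1] * X[:, 2] + 0.1 * rng.normal(size=200)
features, w = adaptive_expansion(X, y)
print(features.shape, np.round(w, 2))
```

The point of the sketch is the cost profile: each round only adds interactions anchored at a few high-weight features, so the expansion stays far smaller than a full polynomial kernel while still capturing the dominant nonlinear terms.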
Cite
Text
Agarwal et al. "Scalable Non-Linear Learning with Adaptive Polynomial Expansions." Neural Information Processing Systems, 2014.
Markdown
[Agarwal et al. "Scalable Non-Linear Learning with Adaptive Polynomial Expansions." Neural Information Processing Systems, 2014.](https://mlanthology.org/neurips/2014/agarwal2014neurips-scalable/)
BibTeX
@inproceedings{agarwal2014neurips-scalable,
  title     = {{Scalable Non-Linear Learning with Adaptive Polynomial Expansions}},
  author    = {Agarwal, Alekh and Beygelzimer, Alina and Hsu, Daniel J. and Langford, John and Telgarsky, Matus J},
  booktitle = {Neural Information Processing Systems},
  year      = {2014},
  pages     = {2051--2059},
  url       = {https://mlanthology.org/neurips/2014/agarwal2014neurips-scalable/}
}