Convex Learning with Invariances
Abstract
Incorporating invariances into a learning algorithm is a common problem in machine learning. We provide a convex formulation which can deal with arbitrary loss functions and arbitrary invariances. In addition, it is a drop-in replacement for most optimization algorithms for kernels, including solvers of the SVMStruct family. The advantage of our setting is that it relies on column generation instead of modifying the underlying optimization problem directly.
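The column-generation idea the abstract alludes to can be sketched as follows: rather than rewriting the optimization problem to encode an invariance, one repeatedly "prices" each training example by searching its orbit of transformed copies for the variant that currently incurs the largest loss, and optimizes against that worst case. The code below is a minimal illustrative sketch, not the authors' solver: it assumes a finite set of precomputed transformations per example, uses a plain hinge loss with a linear model, and takes subgradient steps instead of re-solving a QP with a cutting-plane machinery such as SVMStruct.

```python
import numpy as np

def worst_variant(w, variants, y):
    # Pricing step: among all transformed copies of an example,
    # pick the one with the largest hinge loss under the current w.
    losses = [max(0.0, 1.0 - y * np.dot(w, x)) for x in variants]
    i = int(np.argmax(losses))
    return variants[i], losses[i]

def train(orbits, labels, lam=0.1, lr=0.1, epochs=50):
    # orbits[i] is a list of transformed copies (the "invariance orbit")
    # of training example i; labels[i] is its class in {-1, +1}.
    d = len(orbits[0][0])
    w = np.zeros(d)
    for _ in range(epochs):
        for variants, y in zip(orbits, labels):
            x, loss = worst_variant(w, variants, y)
            # Regularized hinge subgradient, evaluated at the worst variant.
            grad = lam * w - (y * x if loss > 0.0 else 0.0)
            w -= lr * grad
    return w
```

A usage example: with two toy orbits (a base point plus a slightly shifted copy per class), the learned `w` separates the classes at every point of each orbit, since training always targets the currently worst transformation.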
Cite
Text
Teo et al. "Convex Learning with Invariances." Neural Information Processing Systems, 2007.

Markdown
[Teo et al. "Convex Learning with Invariances." Neural Information Processing Systems, 2007.](https://mlanthology.org/neurips/2007/teo2007neurips-convex/)

BibTeX
@inproceedings{teo2007neurips-convex,
title = {{Convex Learning with Invariances}},
author = {Teo, Choon H. and Globerson, Amir and Roweis, Sam T. and Smola, Alex J.},
booktitle = {Neural Information Processing Systems},
year = {2007},
pages = {1489--1496},
url = {https://mlanthology.org/neurips/2007/teo2007neurips-convex/}
}