An Efficient Projection for L1,infinity Regularization
Abstract
In recent years the l<sub>1,∞</sub> norm has been proposed for joint regularization. In essence, this type of regularization aims at extending the l<sub>1</sub> framework for learning sparse models to a setting where the goal is to learn a set of jointly sparse models. In this paper we derive a simple and effective projected gradient method for optimization of l<sub>1,∞</sub> regularized problems. The main challenge in developing such a method lies in computing efficient projections onto the l<sub>1,∞</sub> ball. We present an algorithm that works in O(n log n) time and O(n) memory, where n is the number of parameters. We test our algorithm on a multi-task image annotation problem. Our results show that l<sub>1,∞</sub> leads to better performance than both l<sub>2</sub> and l<sub>1</sub> regularization and that it is effective in discovering jointly sparse solutions.
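To make the object in the abstract concrete: the l<sub>1,∞</sub> norm of a parameter matrix W is the sum over rows of the row-wise maximum absolute value, and the key primitive is the Euclidean projection onto the ball {W : Σ<sub>i</sub> max<sub>j</sub> |W<sub>ij</sub>| ≤ C}. The sketch below computes that projection by bisecting on a Lagrange multiplier; it is a minimal illustration written for clarity, not the paper's O(n log n) algorithm, and the names `project_l1inf`, `row_mu`, and the tolerance/iteration parameters are illustrative assumptions rather than the authors' code.

```python
import numpy as np

def row_mu(a_sorted, csum, theta):
    """Return mu >= 0 solving sum_j max(a_j - mu, 0) = theta for one row,
    given its absolute values sorted in descending order and their cumsum."""
    m = len(a_sorted)
    if theta >= csum[-1]:
        return 0.0  # multiplier large enough to zero out the whole row
    for k in range(1, m + 1):
        mu = (csum[k - 1] - theta) / k  # assume exactly k entries exceed mu
        if k == m or a_sorted[k] <= mu:
            return mu

def project_l1inf(A, C, tol=1e-9, max_iter=100):
    """Euclidean projection of A onto {W : sum_i max_j |W_ij| <= C},
    computed by bisection on the dual variable theta (not the paper's method)."""
    signs, B = np.sign(A), np.abs(A)
    if B.max(axis=1).sum() <= C:
        return A.copy()              # already inside the ball
    Bs = -np.sort(-B, axis=1)        # each row sorted in descending order
    csum = np.cumsum(Bs, axis=1)
    lo, hi = 0.0, B.sum()            # theta = B.sum() forces every row cap to 0
    for _ in range(max_iter):
        theta = 0.5 * (lo + hi)
        mus = np.array([row_mu(Bs[i], csum[i], theta) for i in range(len(Bs))])
        if mus.sum() > C:
            lo = theta               # row caps still sum past C: raise theta
        else:
            hi = theta
        if hi - lo < tol:
            break
    # clip each row at its cap mu_i and restore the original signs
    return signs * np.minimum(B, mus[:, None])

# Usage: the projected-row maxima sum to C once A lies outside the ball.
A = np.random.randn(5, 4)
W = project_l1inf(A, C=2.0)
print(np.abs(W).max(axis=1).sum())   # approximately 2.0
```

In a projected gradient loop of the kind the abstract describes, each update would take a gradient step and then re-project, e.g. `W = project_l1inf(W - eta * grad, C)`.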
Cite

Quattoni et al. "An Efficient Projection for L1,infinity Regularization." International Conference on Machine Learning, 2009. doi:10.1145/1553374.1553484

BibTeX
@inproceedings{quattoni2009icml-efficient,
title = {{An Efficient Projection for L1,infinity Regularization}},
author = {Quattoni, Ariadna and Carreras, Xavier and Collins, Michael and Darrell, Trevor},
booktitle = {International Conference on Machine Learning},
year = {2009},
pages = {857-864},
doi = {10.1145/1553374.1553484},
url = {https://mlanthology.org/icml/2009/quattoni2009icml-efficient/}
}