Learning with Subquadratic Regularization: A Primal-Dual Approach
Abstract
Subquadratic norms have recently been studied in the context of structured sparsity, where they have been shown to be more beneficial than conventional regularizers in applications such as image denoising, compressed sensing, and banded covariance estimation. While existing works have been successful in learning structured sparse models such as trees and graphs, their associated optimization procedures have been inefficient because the proximal operators of these norms are hard to evaluate. In this paper, we study the computational aspects of learning with subquadratic norms in a general setup. Our main contributions are two proximal-operator-based algorithms, ADMM-η and CP-η, which apply generically to these learning problems with convex loss functions and achieve a proven convergence rate of O(1/T) after T iterations. The algorithms are derived in a primal-dual framework, which has not previously been examined for subquadratic norms. We illustrate the efficiency of the developed algorithms in the context of tree-structured sparsity, where they comprehensively outperform the relevant baselines.
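The paper's ADMM-η and CP-η address subquadratic norms whose proximal operators are hard to evaluate; as generic background only, the sketch below shows the standard ADMM template the abstract builds on, instantiated for a regularizer whose proximal operator is cheap (the ℓ1 norm, whose prox is soft-thresholding, applied to a lasso problem). The function names and the lasso instance are illustrative assumptions for exposition, not the paper's algorithms.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (component-wise soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(A, b, lam, rho=1.0, T=300):
    """ADMM for min_x 0.5 * ||A x - b||^2 + lam * ||x||_1.

    Splits the variable as x = z: the x-update is a ridge-type linear
    solve, the z-update applies the prox of the norm, and u is the
    scaled dual variable.
    """
    n = A.shape[1]
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)
    AtA = A.T @ A
    Atb = A.T @ b
    M = AtA + rho * np.eye(n)  # system matrix is fixed across iterations
    for _ in range(T):
        # x-update: solve (A^T A + rho I) x = A^T b + rho (z - u)
        x = np.linalg.solve(M, Atb + rho * (z - u))
        # z-update: proximal step on the regularizer
        z = soft_threshold(x + u, lam / rho)
        # scaled dual update on the consensus residual x - z
        u = u + x - z
    return z
```

For norms like those studied here, the z-update is exactly the step that becomes expensive, which is what motivates the primal-dual variants developed in the paper.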
Cite
Text

Sankaran et al. "Learning with Subquadratic Regularization: A Primal-Dual Approach." International Joint Conference on Artificial Intelligence, 2020. doi:10.24963/IJCAI.2020/272

Markdown

[Sankaran et al. "Learning with Subquadratic Regularization: A Primal-Dual Approach." International Joint Conference on Artificial Intelligence, 2020.](https://mlanthology.org/ijcai/2020/sankaran2020ijcai-learning/) doi:10.24963/IJCAI.2020/272

BibTeX
@inproceedings{sankaran2020ijcai-learning,
  title = {{Learning with Subquadratic Regularization: A Primal-Dual Approach}},
author = {Sankaran, Raman and Bach, Francis R. and Bhattacharyya, Chiranjib},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2020},
  pages = {1963--1969},
doi = {10.24963/IJCAI.2020/272},
url = {https://mlanthology.org/ijcai/2020/sankaran2020ijcai-learning/}
}