Forward Backward Greedy Algorithms for Multi-Task Learning with Faster Rates
Abstract
A large body of algorithms has been proposed for multi-task learning. However, the effectiveness of many multi-task learning algorithms depends heavily on structural regularization, which introduces bias into the resulting estimators and leads to slower convergence rates. In this paper, we aim to develop a multi-task learning algorithm with a faster convergence rate. In particular, we propose a general estimator for multi-task learning with a row-sparsity constraint on the parameter matrix, i.e., the number of nonzero rows in the parameter matrix is small. The proposed estimator is the solution to a nonconvex optimization problem. In order to solve it, we develop a forward backward greedy algorithm with a provable guarantee. More specifically, we prove that the approximate solution output by the greedy algorithm attains a sharper estimation error bound than many state-of-the-art multi-task learning methods. Moreover, our estimator enjoys model selection consistency under a mild condition. Thorough experiments on both synthetic and real-world data demonstrate the effectiveness of our method and back up our theory.
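The row-sparse forward-backward greedy scheme the abstract describes can be illustrated with a minimal sketch for the squared-loss case: a forward step adds the row of the parameter matrix that most reduces the loss, and a backward step prunes rows whose removal barely increases it. The function name, the `backward_ratio` constant, and the stopping rule below are illustrative assumptions, not the paper's exact procedure or constants.

```python
import numpy as np

def foba_multitask(X, Y, max_rows=10, backward_ratio=0.5, tol=1e-6):
    """Sketch of forward-backward greedy row selection.

    Approximately minimizes ||Y - X W||_F^2 over W (d x k) with few
    nonzero rows; each row of W couples all k tasks, so rows are
    added/removed jointly across tasks.
    """
    n, d = X.shape

    def fit(rows):
        # Least-squares refit restricted to the chosen nonzero rows of W.
        if not rows:
            return np.zeros((0, Y.shape[1])), np.linalg.norm(Y) ** 2
        W_sub, *_ = np.linalg.lstsq(X[:, rows], Y, rcond=None)
        return W_sub, np.linalg.norm(Y - X[:, rows] @ W_sub) ** 2

    selected = []            # indices of nonzero rows of W
    _, loss = fit(selected)
    while len(selected) < max_rows:
        # Forward step: add the row giving the largest loss decrease.
        best_j, best_loss = None, loss
        for j in range(d):
            if j in selected:
                continue
            _, lj = fit(selected + [j])
            if lj < best_loss:
                best_j, best_loss = j, lj
        if best_j is None or loss - best_loss < tol:
            break
        gain = loss - best_loss
        selected.append(best_j)
        loss = best_loss
        # Backward steps: drop any row whose removal costs less than
        # a fraction of the last forward gain (illustrative threshold).
        while len(selected) > 1:
            worst_i, worst_loss = None, np.inf
            for i in selected:
                _, li = fit([r for r in selected if r != i])
                if li < worst_loss:
                    worst_i, worst_loss = i, li
            if worst_loss - loss < backward_ratio * gain:
                selected.remove(worst_i)
                loss = worst_loss
            else:
                break
    W = np.zeros((d, Y.shape[1]))
    if selected:
        W_sub, _ = fit(selected)
        W[selected] = W_sub
    return W, sorted(selected)
```

On noiseless data whose true parameter matrix has only a couple of nonzero rows, the forward steps pick up the supporting rows and the backward steps discard any spurious ones, recovering the row support exactly.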
Cite
Text
Tian et al. "Forward Backward Greedy Algorithms for Multi-Task Learning with Faster Rates." Conference on Uncertainty in Artificial Intelligence, 2016.
Markdown
[Tian et al. "Forward Backward Greedy Algorithms for Multi-Task Learning with Faster Rates." Conference on Uncertainty in Artificial Intelligence, 2016.](https://mlanthology.org/uai/2016/tian2016uai-forward/)
BibTeX
@inproceedings{tian2016uai-forward,
title = {{Forward Backward Greedy Algorithms for Multi-Task Learning with Faster Rates}},
author = {Tian, Lu and Xu, Pan and Gu, Quanquan},
booktitle = {Conference on Uncertainty in Artificial Intelligence},
year = {2016},
url = {https://mlanthology.org/uai/2016/tian2016uai-forward/}
}