Nesterov Meets Robust Multitask Learning Twice
Abstract
In this paper, we study the temporal multitask learning problem, where a smoothness constraint is imposed on the time-series weights. In addition, a group lasso penalty is introduced to select important features. Moreover, the regression loss in each time frame is non-squared, which alleviates the influence of varying noise scales across tasks, and a nuclear norm is added to promote a low-rank structure. We first formulate the objective as a max-min problem, where the dual variable is optimized via an accelerated dual ascent method, while the primal variable is solved via the smoothed Fast Iterative Shrinkage-Thresholding Algorithm (S-FISTA). We provide a convergence analysis of the proposed method, and experiments demonstrate its effectiveness.
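The building block named in the abstract, FISTA with a group-lasso regularizer, can be sketched as follows. This is not the authors' code: it is a minimal illustration of Nesterov-accelerated proximal gradient descent on a plain group-lasso least-squares objective (rows of the weight matrix as groups), omitting the temporal smoothness term, the non-squared loss, the nuclear norm, and the dual ascent step from the paper; the function names and parameters are my own.

```python
import numpy as np

def prox_group_lasso(W, t):
    """Row-wise soft-thresholding: prox of t * sum_i ||W[i, :]||_2."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)
    return scale * W

def fista_group_lasso(X, Y, lam=0.1, iters=200):
    """FISTA for min_W 0.5 * ||X W - Y||_F^2 + lam * sum_i ||W[i, :]||_2.

    A simplified stand-in for the paper's primal solver: the full objective
    there also includes smoothness and nuclear-norm terms, handled via
    smoothing (S-FISTA) and accelerated dual ascent.
    """
    d, k = X.shape[1], Y.shape[1]
    L = np.linalg.norm(X, 2) ** 2          # Lipschitz constant of the gradient
    W = np.zeros((d, k))
    Z, t = W.copy(), 1.0                   # Nesterov momentum variables
    for _ in range(iters):
        grad = X.T @ (X @ Z - Y)           # gradient of the smooth part at Z
        W_next = prox_group_lasso(Z - grad / L, lam / L)
        t_next = (1 + np.sqrt(1 + 4 * t * t)) / 2
        Z = W_next + ((t - 1) / t_next) * (W_next - W)
        W, t = W_next, t_next
    return W
```

The momentum update on `Z` is what gives the accelerated O(1/k^2) rate over plain proximal gradient descent; the row-wise shrinkage zeroes out entire feature groups at once.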
Cite
Text
Kang and Liu. "Nesterov Meets Robust Multitask Learning Twice." NeurIPS 2023 Workshops: OPT, 2023.
BibTeX
@inproceedings{kang2023neuripsw-nesterov,
title = {{Nesterov Meets Robust Multitask Learning Twice}},
author = {Kang, Yifan and Liu, Kai},
booktitle = {NeurIPS 2023 Workshops: OPT},
year = {2023},
url = {https://mlanthology.org/neuripsw/2023/kang2023neuripsw-nesterov/}
}