Multitask Metric Learning: Theory and Algorithm

Abstract

In this paper, we study the problem of multitask metric learning (mtML). We first examine the generalization bound of the regularized mtML formulation based on the notion of algorithmic stability, proving the convergence rate of mtML and revealing the trade-off between the tasks. We also establish the theoretical connection between mtML, single-task learning, and pooling-task learning approaches. In addition, we present a novel boosting-based mtML (mt-BML) algorithm, which scales well with the feature dimension of the data. Finally, we devise an efficient second-order Riemannian retraction operator tailored specifically to our mt-BML algorithm. It produces a low-rank solution for mtML, reducing model complexity and potentially improving generalization performance. Extensive evaluations on several benchmark data sets verify the effectiveness of our learning algorithm.
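The abstract does not spell out the retraction itself, but to give a rough sense of how a low-rank Mahalanobis metric can be maintained after an unconstrained update, the sketch below projects a symmetric matrix onto the set of PSD matrices of bounded rank by eigenvalue truncation. This is a generic illustration only, not the paper's second-order Riemannian retraction; all names and the rank parameter are assumptions for the example.

import numpy as np

def project_low_rank_psd(M, rank):
    # Generic rank-truncating PSD projection (illustrative; NOT the paper's retraction):
    # symmetrize, drop negative eigenvalues, keep the largest `rank` eigenpairs.
    M_sym = 0.5 * (M + M.T)                      # guard against numerical asymmetry
    eigvals, eigvecs = np.linalg.eigh(M_sym)     # eigenvalues in ascending order
    eigvals = np.clip(eigvals, 0.0, None)        # enforce positive semidefiniteness
    idx = np.argsort(eigvals)[::-1][:rank]       # indices of the top-`rank` eigenvalues
    return (eigvecs[:, idx] * eigvals[idx]) @ eigvecs[:, idx].T

# Example: after a gradient step on a metric matrix, pull it back to rank <= 5.
d = 20
M = np.random.randn(d, d)
M_low_rank = project_low_rank_psd(M, rank=5)
print(np.linalg.matrix_rank(M_low_rank))         # at most 5

In practice, the appeal of a retraction tailored to the optimization geometry (as in the paper) over a plain projection like this one is efficiency and better local behavior of the iterates; the sketch only conveys the low-rank, PSD structure of the resulting metric.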

Cite

Text

Wang et al. "Multitask Metric Learning: Theory and Algorithm." Artificial Intelligence and Statistics, 2019.

Markdown

[Wang et al. "Multitask Metric Learning: Theory and Algorithm." Artificial Intelligence and Statistics, 2019.](https://mlanthology.org/aistats/2019/wang2019aistats-multitask/)

BibTeX

@inproceedings{wang2019aistats-multitask,
  title     = {{Multitask Metric Learning: Theory and Algorithm}},
  author    = {Wang, Boyu and Zhang, Hejia and Liu, Peng and Shen, Zebang and Pineau, Joelle},
  booktitle = {Artificial Intelligence and Statistics},
  year      = {2019},
  pages     = {3362-3371},
  volume    = {89},
  url       = {https://mlanthology.org/aistats/2019/wang2019aistats-multitask/}
}