Multi-Task Learning and Algorithmic Stability

Abstract

In this paper, we study multi-task algorithms from the perspective of algorithmic stability. We define multi-task uniform stability, a generalization of conventional uniform stability, which measures the maximum difference between the loss of a multi-task algorithm trained on a data set and that of the same algorithm trained on the data set with one data point removed from each task. To analyze multi-task algorithms via multi-task uniform stability, we prove a generalized McDiarmid's inequality in which the bounded-difference condition holds when multiple input arguments are changed simultaneously, rather than only one as in the conventional McDiarmid's inequality. Using the generalized McDiarmid's inequality as a tool, we analyze the generalization performance of general multi-task algorithms in terms of multi-task uniform stability. Moreover, as applications, we prove generalization bounds for several representative regularized multi-task algorithms.
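For context, the conventional (single-coordinate) McDiarmid inequality that the paper generalizes can be stated in its standard textbook form; this sketch is background, not the paper's own generalized theorem:

```latex
% Conventional bounded-difference (McDiarmid) inequality, standard form.
% Suppose f : \mathcal{X}^n \to \mathbb{R} satisfies, for each coordinate i
% and all x_1,\dots,x_n, x_i':
%   |f(x_1,\dots,x_i,\dots,x_n) - f(x_1,\dots,x_i',\dots,x_n)| \le c_i.
% Then for independent X_1,\dots,X_n and any t > 0:
\[
  \Pr\bigl( f(X_1,\dots,X_n) - \mathbb{E}\,f(X_1,\dots,X_n) \ge t \bigr)
  \;\le\; \exp\!\left( \frac{-2 t^2}{\sum_{i=1}^{n} c_i^2} \right).
\]
% The paper's generalization relaxes the condition above so that the
% difference bound holds when several coordinates (e.g., one data point
% per task) are changed at once, rather than a single coordinate.
```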

Cite

Text

Zhang. "Multi-Task Learning and Algorithmic Stability." AAAI Conference on Artificial Intelligence, 2015. doi:10.1609/AAAI.V29I1.9558

Markdown

[Zhang. "Multi-Task Learning and Algorithmic Stability." AAAI Conference on Artificial Intelligence, 2015.](https://mlanthology.org/aaai/2015/zhang2015aaai-multi/) doi:10.1609/AAAI.V29I1.9558

BibTeX

@inproceedings{zhang2015aaai-multi,
  title     = {{Multi-Task Learning and Algorithmic Stability}},
  author    = {Zhang, Yu},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2015},
  pages     = {3181--3187},
  doi       = {10.1609/AAAI.V29I1.9558},
  url       = {https://mlanthology.org/aaai/2015/zhang2015aaai-multi/}
}