Revisiting Stein's Paradox: Multi-Task Averaging

Abstract

We present a multi-task learning approach to jointly estimate the means of multiple independent distributions from samples. The proposed multi-task averaging (MTA) algorithm results in a convex combination of the individual tasks' sample averages. We derive the optimal amount of regularization in the two-task case for both the minimum-risk estimator and a minimax estimator, and show that the optimal amount of regularization can be estimated in practice without cross-validation. We extend these practical estimators to an arbitrary number of tasks. Simulations and real-data experiments demonstrate the advantage of the proposed MTA estimators over standard averaging and James-Stein estimation.
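To make the idea concrete, here is a minimal NumPy sketch of a constant-similarity MTA estimator of the kind the abstract describes: each task's estimate is a convex combination of all tasks' sample averages, obtained by solving a regularized system built from a graph Laplacian over the tasks. The similarity constant and the regularization weight `gamma` below are illustrative plug-in choices, not the paper's exact formulas.

```python
import numpy as np

def mta_constant(sample_means, sample_vars, n_samples, gamma=1.0):
    """Sketch of multi-task averaging with a constant pairwise similarity.

    Returns jointly regularized mean estimates W @ sample_means, where
    W = (I + (gamma / T) * Sigma * L)^{-1} is row-stochastic with
    nonnegative entries, so each estimate is a convex combination of
    the tasks' sample averages.
    """
    ybar = np.asarray(sample_means, dtype=float)
    T = ybar.size
    # Variance of each task's sample mean.
    sigma2 = np.asarray(sample_vars, dtype=float) / np.asarray(n_samples, dtype=float)
    # Constant similarity: inverse of the average squared pairwise gap
    # between sample means (an assumed plug-in choice for this sketch).
    diffs = ybar[:, None] - ybar[None, :]
    avg_sq_gap = np.sum(diffs ** 2) / (T * (T - 1))
    a = 2.0 / max(avg_sq_gap, 1e-12)
    # Graph Laplacian of the fully connected task-similarity graph.
    A = a * (np.ones((T, T)) - np.eye(T))
    L = np.diag(A.sum(axis=1)) - A
    W = np.linalg.inv(np.eye(T) + (gamma / T) * np.diag(sigma2) @ L)
    return W @ ybar
```

Because the combination weights are convex, every MTA estimate lands between the smallest and largest sample average, and the estimates are pulled toward one another relative to plain per-task averaging.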

Cite

Text

Feldman et al. "Revisiting Stein's Paradox: Multi-Task Averaging." Journal of Machine Learning Research, 15:3621-3662, 2014.

Markdown

[Feldman et al. "Revisiting Stein's Paradox: Multi-Task Averaging." Journal of Machine Learning Research, 15:3621-3662, 2014.](https://mlanthology.org/jmlr/2014/feldman2014jmlr-revisiting/)

BibTeX

@article{feldman2014jmlr-revisiting,
  title     = {{Revisiting Stein's Paradox: Multi-Task Averaging}},
  author    = {Feldman, Sergey and Gupta, Maya R. and Frigyik, Bela A.},
  journal   = {Journal of Machine Learning Research},
  year      = {2014},
  pages     = {3621--3662},
  volume    = {15},
  url       = {https://mlanthology.org/jmlr/2014/feldman2014jmlr-revisiting/}
}