Algorithmic Stability and Meta-Learning
Abstract
A mechanism of transfer learning is analysed, where samples drawn from different learning tasks of an environment are used to improve the learner's performance on a new task. We give a general method to prove generalisation error bounds for such meta-algorithms. The method can be applied to the bias learning model of J. Baxter and used to derive novel generalisation bounds for meta-algorithms searching spaces of uniformly stable algorithms. We also present an application to regularized least squares regression.
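To make the final application concrete, the sketch below implements plain regularized (ridge) least squares regression, the kind of uniformly stable base algorithm whose generalisation the paper's meta-bounds address. The regularization parameter `lam` and the synthetic data are illustrative assumptions, not values from the paper, and this is only a sketch of the base learner, not of the meta-learning analysis itself.

```python
import numpy as np


def ridge_regression(X, y, lam=1.0):
    """Regularized least squares: w = argmin ||Xw - y||^2 + lam * ||w||^2.

    A larger lam gives a more stable solution, which is the property the
    uniform-stability bounds rely on. lam is an assumed illustrative
    parameter, not a value taken from the paper.
    """
    d = X.shape[1]
    # Closed-form solution of the regularized normal equations.
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 3))            # 50 samples, 3 features (synthetic)
    w_true = np.array([1.0, -2.0, 0.5])
    y = X @ w_true + 0.1 * rng.normal(size=50)
    w_hat = ridge_regression(X, y, lam=0.1)
    print("estimated weights:", w_hat)
```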
Cite

Text
Maurer. "Algorithmic Stability and Meta-Learning." Journal of Machine Learning Research, 2005.

Markdown
[Maurer. "Algorithmic Stability and Meta-Learning." Journal of Machine Learning Research, 2005.](https://mlanthology.org/jmlr/2005/maurer2005jmlr-algorithmic/)

BibTeX
@article{maurer2005jmlr-algorithmic,
  title = {{Algorithmic Stability and Meta-Learning}},
  author = {Maurer, Andreas},
  journal = {Journal of Machine Learning Research},
  year = {2005},
  volume = {6},
  pages = {967--994},
  url = {https://mlanthology.org/jmlr/2005/maurer2005jmlr-algorithmic/}
}