Risk Bound of Transfer Learning Using Parametric Feature Mapping and Its Application to Sparse Coding

Abstract

In this study, we consider a transfer-learning problem under the parameter transfer approach, in which a suitable parameter of a feature mapping is learned on one task and then applied to another, objective task. We introduce the notions of local stability and parameter transfer learnability for parametric feature mappings, and derive an excess risk bound for parameter transfer algorithms. As an application of parameter transfer learning, we analyze the performance of sparse coding in self-taught learning. Although self-taught learning algorithms that exploit a large volume of unlabeled data often show excellent empirical performance, theoretical guarantees for them have not yet been established. In this paper, we also provide a theoretical excess risk bound for self-taught learning. In addition, we show that the results of numerical experiments agree with our theoretical analysis.
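The parameter transfer pipeline described above can be illustrated with a minimal sketch: a dictionary (the feature-map parameter) is learned on unlabeled source data, and the resulting sparse-coding map is then reused as a fixed feature extractor on target samples. This is a generic illustration using standard ISTA and alternating minimization on synthetic data, not the specific algorithm or bounds studied in the paper; all dimensions and regularization values below are arbitrary choices for the example.

```python
import numpy as np

def ista(D, x, lam=0.1, n_iter=200):
    """Sparse-code x against dictionary D via ISTA (soft-thresholded gradient steps)."""
    L = np.linalg.norm(D, 2) ** 2  # Lipschitz constant of the smooth part's gradient
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = z - D.T @ (D @ z - x) / L          # gradient step on the least-squares term
        z = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold (L1 prox)
    return z

def learn_dictionary(X, k, lam=0.1, n_iter=20, seed=0):
    """Alternate between sparse coding the columns of X and a least-squares dictionary update."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((X.shape[0], k))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_iter):
        Z = np.stack([ista(D, x, lam) for x in X.T], axis=1)
        # Ridge-regularized least-squares update of D, then renormalize the atoms.
        D = X @ Z.T @ np.linalg.inv(Z @ Z.T + 1e-6 * np.eye(k))
        D /= np.linalg.norm(D, axis=0) + 1e-12
    return D

# Source task: learn the feature-map parameter (dictionary) from unlabeled data.
rng = np.random.default_rng(1)
X_unlabeled = rng.standard_normal((10, 50))   # 50 unlabeled samples in R^10
D = learn_dictionary(X_unlabeled, k=15)

# Target task: transfer D and use sparse coding as a fixed feature mapping.
x_new = rng.standard_normal(10)
z = ista(D, x_new)                            # 15-dimensional sparse feature vector
```

In self-taught learning, the transferred features `z` would then feed a supervised learner on the (typically small) labeled target set.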

Cite

Text

Kumagai and Kanamori. "Risk Bound of Transfer Learning Using Parametric Feature Mapping and Its Application to Sparse Coding." Machine Learning, 2019. doi:10.1007/s10994-019-05805-2

Markdown

[Kumagai and Kanamori. "Risk Bound of Transfer Learning Using Parametric Feature Mapping and Its Application to Sparse Coding." Machine Learning, 2019.](https://mlanthology.org/mlj/2019/kumagai2019mlj-risk/) doi:10.1007/s10994-019-05805-2

BibTeX

@article{kumagai2019mlj-risk,
  title     = {{Risk Bound of Transfer Learning Using Parametric Feature Mapping and Its Application to Sparse Coding}},
  author    = {Kumagai, Wataru and Kanamori, Takafumi},
  journal   = {Machine Learning},
  year      = {2019},
  pages     = {1975--2008},
  doi       = {10.1007/s10994-019-05805-2},
  volume    = {108},
  url       = {https://mlanthology.org/mlj/2019/kumagai2019mlj-risk/}
}