Learning, Regularization and Ill-Posed Inverse Problems

Abstract

Many works have shown that strong connections relate learning from examples to regularization techniques for ill-posed inverse problems. Nevertheless, until now there has been no formal evidence either that learning from examples can be seen as an inverse problem or that theoretical results in learning theory can be independently derived using tools from regularization theory. In this paper we provide a positive answer to both questions. Indeed, considering the square loss, we translate the learning problem into the language of regularization theory and show that consistency results and the optimal regularization parameter choice can be derived by discretization of the corresponding inverse problem.
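As a concrete illustration of the square-loss setting the abstract refers to, the following is a minimal sketch of Tikhonov regularization applied to a discretized linear inverse problem (this is ordinary ridge regression; the data, dimensions, and function name are illustrative, not taken from the paper):

```python
import numpy as np

# Hypothetical discretized inverse problem: recover w from noisy y = X @ w + noise.
rng = np.random.default_rng(0)
n, d = 50, 10
X = rng.standard_normal((n, d))           # sampled "measurement" operator
w_true = rng.standard_normal(d)
y = X @ w_true + 0.1 * rng.standard_normal(n)

def tikhonov(X, y, lam):
    """Tikhonov-regularized least squares:
    argmin_w ||X w - y||^2 + lam * ||w||^2,
    solved via the normal equations (X^T X + lam I) w = X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

w_hat = tikhonov(X, y, lam=1e-2)
```

The regularization parameter `lam` trades data fit against the stability of the inversion; the paper's contribution concerns how consistency and the optimal choice of this parameter follow from regularization theory once the learning problem is cast in this form.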

Cite

Text

Rosasco et al. "Learning, Regularization and Ill-Posed Inverse Problems." Neural Information Processing Systems, 2004.

Markdown

[Rosasco et al. "Learning, Regularization and Ill-Posed Inverse Problems." Neural Information Processing Systems, 2004.](https://mlanthology.org/neurips/2004/rosasco2004neurips-learning/)

BibTeX

@inproceedings{rosasco2004neurips-learning,
  title     = {{Learning, Regularization and Ill-Posed Inverse Problems}},
  author    = {Rosasco, Lorenzo and Caponnetto, Andrea and De Vito, Ernesto and Odone, Francesca and De Giovannini, Umberto},
  booktitle = {Neural Information Processing Systems},
  year      = {2004},
  pages     = {1145-1152},
  url       = {https://mlanthology.org/neurips/2004/rosasco2004neurips-learning/}
}