Barron and Cover’s Theory in Supervised Learning and Its Application to Lasso

Abstract

We study Barron and Cover’s theory (BC theory) in supervised learning. The original BC theory can be applied to supervised learning only approximately and in a limited way. Although Barron (2008) and Chatterjee and Barron (2014) succeeded in removing the approximation, their idea essentially cannot be applied to supervised learning in general. By resolving this issue, we propose an extension of BC theory to supervised learning. The extended theory has several advantages inherited from the original BC theory. First, it holds for any finite sample size n. Second, it requires remarkably few assumptions. Third, it gives a justification of the MDL principle in supervised learning. As an application, we also derive new risk and regret bounds for lasso with random design. In contrast to past work, the derived risk bound holds for any finite n without requiring boundedness of the features. The behavior of the regret bound is investigated by numerical simulations. We believe that this is the first extension of BC theory to general supervised learning without approximation.
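To make the "lasso with random design" setting concrete, the following is a minimal illustrative sketch (not the paper's construction or bounds): the feature matrix X is drawn from a distribution rather than fixed in advance, a sparse linear model generates the responses, and lasso is fit to the sample. All parameter values and the choice of scikit-learn's Lasso are assumptions for illustration only.

import numpy as np
from sklearn.linear_model import Lasso

# Hypothetical random-design regression problem (values are illustrative).
rng = np.random.default_rng(0)
n, d, k = 100, 200, 5            # sample size, dimension, sparsity
beta = np.zeros(d)
beta[:k] = 1.0                   # true sparse coefficient vector
X = rng.standard_normal((n, d))  # random (Gaussian) design: X itself is drawn at random
y = X @ beta + 0.5 * rng.standard_normal(n)

model = Lasso(alpha=0.1).fit(X, y)

# Crude in-sample proxy for the estimation error; the paper's actual risk
# and regret bounds are theoretical quantities, not this statistic.
risk_proxy = np.mean((X @ (model.coef_ - beta)) ** 2)
print(f"in-sample excess risk proxy: {risk_proxy:.4f}")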

Cite

Text

Kawakita and Takeuchi. "Barron and Cover’s Theory in Supervised Learning and Its Application to Lasso." International Conference on Machine Learning, 2016.

Markdown

[Kawakita and Takeuchi. "Barron and Cover’s Theory in Supervised Learning and Its Application to Lasso." International Conference on Machine Learning, 2016.](https://mlanthology.org/icml/2016/kawakita2016icml-barron/)

BibTeX

@inproceedings{kawakita2016icml-barron,
  title     = {{Barron and Cover's Theory in Supervised Learning and Its Application to Lasso}},
  author    = {Kawakita, Masanori and Takeuchi, Jun'ichi},
  booktitle = {International Conference on Machine Learning},
  year      = {2016},
  pages     = {1958--1966},
  volume    = {48},
  url       = {https://mlanthology.org/icml/2016/kawakita2016icml-barron/}
}