Sample-Based Approximate Regularization

Abstract

We introduce a method for regularizing linearly parameterized functions using general derivative-based penalties, which relies on sampling together with finite-difference approximations of the relevant derivatives. We call this approach sample-based approximate regularization (SAR). We provide theoretical guarantees on the fidelity of such regularizers, relative to the exact regularizers they approximate, and prove that the approximations converge efficiently. We also examine the empirical performance of SAR on several datasets.
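To give a sense of the idea, the following is a minimal sketch of sample-based approximate regularization for a linear model f(x) = w · φ(x): a first-derivative (smoothness) penalty is approximated by forward finite differences at sampled points, yielding a quadratic penalty matrix that plugs into a ridge-style solve. The feature map, sampling distribution, and function names here are illustrative assumptions, not the paper's specific choices.

```python
import numpy as np

def sar_penalty_matrix(phi, xs, eps=1e-3):
    """Approximate the gradient-penalty matrix for f(x) = w . phi(x).

    Uses forward finite differences at the sampled points xs, so that
    w @ M @ w approximates E[ f'(x)^2 ] under the sampling distribution.
    (Illustrative sketch; not the paper's exact construction.)
    """
    diffs = np.stack([(phi(x + eps) - phi(x)) / eps for x in xs])  # (n, d)
    return diffs.T @ diffs / len(xs)

# Hypothetical feature map: cubic polynomial features in 1D.
def phi(x):
    return np.array([1.0, x, x**2, x**3])

rng = np.random.default_rng(0)
xs = rng.uniform(-1.0, 1.0, size=200)          # sampled regularization points
X = np.stack([phi(x) for x in xs])             # design matrix (n, d)
y = np.sin(3 * xs) + 0.1 * rng.standard_normal(len(xs))  # noisy targets

# Fit with the sampled smoothness penalty in place of a plain ridge term.
M = sar_penalty_matrix(phi, xs)
lam = 0.1
w = np.linalg.solve(X.T @ X + lam * len(xs) * M, X.T @ y)
```

Because the constant feature has zero derivative, the penalty matrix assigns it no cost, so the regularizer discourages wiggliness rather than shrinking the function toward zero.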

Cite

Text

Bachman et al. "Sample-Based Approximate Regularization." International Conference on Machine Learning, 2014.

Markdown

[Bachman et al. "Sample-Based Approximate Regularization." International Conference on Machine Learning, 2014.](https://mlanthology.org/icml/2014/bachman2014icml-samplebased/)

BibTeX

@inproceedings{bachman2014icml-samplebased,
  title     = {{Sample-Based Approximate Regularization}},
  author    = {Bachman, Philip and Farahmand, Amir-Massoud and Precup, Doina},
  booktitle = {International Conference on Machine Learning},
  year      = {2014},
  pages     = {1926--1934},
  volume    = {32},
  url       = {https://mlanthology.org/icml/2014/bachman2014icml-samplebased/}
}