From Data Distributions to Regularization in Invariant Learning

Abstract

Ideally, pattern recognition machines provide constant output when the inputs are transformed under a group G of desired invariances. These invariances can be achieved by enhancing the training data to include examples of inputs transformed by elements of G, while leaving the corresponding targets unchanged. Alternatively, the cost function for training can include a regularization term that penalizes changes in the output when the input is transformed under the group.
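
The abstract contrasts two routes to invariance: enhancing the training data with group-transformed inputs, and adding a regularization term that penalizes output changes under the transformation. The sketch below illustrates both for a planar rotation group; the linear model, the penalty weight lam, and the rotation step eps are illustrative assumptions, not details from the paper.

# A minimal sketch of the two approaches from the abstract, assuming a planar
# rotation group and a toy linear model; not the paper's actual derivation.
import numpy as np

rng = np.random.default_rng(0)

def rotate(x, theta):
    """Apply a 2D rotation (an element of the invariance group G) to x."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]]) @ x

# Toy data: inputs in R^2 with scalar targets.
X = rng.normal(size=(100, 2))
y = rng.normal(size=100)

# Approach 1: enhance the training data with group-transformed inputs while
# leaving the corresponding targets unchanged.
thetas = rng.uniform(0.0, 2.0 * np.pi, size=len(X))
X_aug = np.vstack([X, [rotate(x, t) for x, t in zip(X, thetas)]])
y_aug = np.concatenate([y, y])
print("augmented set:", X_aug.shape, y_aug.shape)

# Approach 2: regularize with a term that penalizes changes in the model
# output when the input is transformed under the group.
def invariance_penalty(w, X, eps=1e-2):
    """Mean squared change in output f(x) = w . x under a small rotation."""
    f = X @ w
    f_rot = np.array([rotate(x, eps) for x in X]) @ w
    return np.mean((f_rot - f) ** 2)

w = rng.normal(size=2)   # toy weight vector
lam = 0.1                # penalty weight (an illustrative choice)
data_loss = np.mean((X @ w - y) ** 2)
total_loss = data_loss + lam * invariance_penalty(w, X)
print(f"data loss {data_loss:.3f}, regularized loss {total_loss:.3f}")

For small eps, this penalty approximates a squared derivative of the output along the group orbit, the kind of tangent-style term that the paper's title connects to training on transformed data distributions.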

Cite

Text

Leen. "From Data Distributions to Regularization in Invariant Learning." Neural Information Processing Systems, 1994.

Markdown

[Leen. "From Data Distributions to Regularization in Invariant Learning." Neural Information Processing Systems, 1994.](https://mlanthology.org/neurips/1994/leen1994neurips-data/)

BibTeX

@inproceedings{leen1994neurips-data,
  title     = {{From Data Distributions to Regularization in Invariant Learning}},
  author    = {Leen, Todd K.},
  booktitle = {Neural Information Processing Systems},
  year      = {1994},
  pages     = {223--230},
  url       = {https://mlanthology.org/neurips/1994/leen1994neurips-data/}
}