For Every Generalization Action, Is There Really an Equal and Opposite Reaction?
Abstract
The “Conservation Law for Generalization Performance” [Schaffer, 1994] states that for any learning algorithm and bias, “generalization is a zero-sum enterprise.” In this paper we study the law and show that while the law is true, the manner in which the Conservation Law adds up generalization performance over all target concepts, without regard to the probability with which each concept occurs, is relevant only in a uniformly random universe. We then introduce a more meaningful measure of generalization, expected generalization performance. Unlike the Conservation Law's measure of generalization performance (which is, in essence, defined to be zero), expected generalization performance is conserved only when certain symmetric properties hold in our universe. There is no reason to believe, a priori, that such symmetries exist; learning algorithms may well exhibit non-zero (expected) generalization performance.
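The abstract's central distinction can be illustrated with a toy sketch (names, the "predict all zeros" bias, and the nonuniform prior below are assumptions chosen for illustration, not from the paper): over all binary target concepts on a few off-training-set points, a learner's summed generalization performance is zero, yet under a prior that favors some concepts, its expected generalization performance can be positive.

```python
from itertools import product

# Toy illustration: 3 unseen (off-training-set) points with binary labels.
# The learner's bias, assumed for illustration: predict 0 everywhere.
POINTS = 3
targets = list(product([0, 1], repeat=POINTS))  # all 2^3 target concepts
predict = (0, 0, 0)

def gp(target):
    """Generalization performance: #correct minus #incorrect on unseen points."""
    correct = sum(p == t for p, t in zip(predict, target))
    return correct - (POINTS - correct)

# Conservation Law: summing over ALL targets with equal weight gives zero.
uniform_total = sum(gp(t) for t in targets)  # -> 0

# Expected generalization performance under a nonuniform prior that favors
# "mostly zero" concepts (the 2**(...) weighting is a made-up example prior).
weights = [2 ** (POINTS - sum(t)) for t in targets]  # fewer 1s -> more likely
z = sum(weights)
expected = sum(w * gp(t) for w, t in zip(weights, targets)) / z  # -> positive

print(uniform_total, expected)
```

The first sum is zero regardless of the learner's bias, matching the Conservation Law; the second is nonzero precisely because the universe's symmetry over target concepts has been broken.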
Cite
Text
Rao et al. "For Every Generalization Action, Is There Really an Equal and Opposite Reaction?." International Conference on Machine Learning, 1995. doi:10.1016/B978-1-55860-377-6.50065-7

Markdown

[Rao et al. "For Every Generalization Action, Is There Really an Equal and Opposite Reaction?." International Conference on Machine Learning, 1995.](https://mlanthology.org/icml/1995/rao1995icml-every/) doi:10.1016/B978-1-55860-377-6.50065-7

BibTeX
@inproceedings{rao1995icml-every,
  title = {{For Every Generalization Action, Is There Really an Equal and Opposite Reaction?}},
  author = {Rao, R. Bharat and Gordon, Diana F. and Spears, William M.},
  booktitle = {International Conference on Machine Learning},
  year = {1995},
  pages = {471--479},
  doi = {10.1016/B978-1-55860-377-6.50065-7},
  url = {https://mlanthology.org/icml/1995/rao1995icml-every/}
}