Model Complexity, Goodness of Fit and Diminishing Returns

Abstract

We investigate a general characteristic of the trade-off in learning problems between goodness-of-fit and model complexity. Specifically, we characterize a general class of learning problems where the goodness-of-fit function can be shown to be convex within first-order as a function of model complexity. This general property of "diminishing returns" is illustrated on a number of real data sets and learning problems, including finite mixture modeling and multivariate linear regression.
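
As a rough illustration of the "diminishing returns" behavior the abstract describes (this is not code from the paper), the sketch below fits nested multivariate linear regression models of increasing complexity to synthetic data and reports the residual sum of squares at each step. The data-generating coefficients, sample size, and number of predictors are all assumptions chosen for illustration.

# A minimal sketch, assuming synthetic data: goodness of fit improves with
# model complexity, but each added predictor buys less than the last.
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 10

# Synthetic data: the response depends mostly on the first few predictors.
X = rng.normal(size=(n, p))
true_beta = np.array([3.0, 2.0, 1.0, 0.5, 0.25] + [0.0] * (p - 5))
y = X @ true_beta + rng.normal(scale=1.0, size=n)

# Fit nested models with k = 1, ..., p predictors and record the
# residual sum of squares (lower RSS = better goodness of fit).
for k in range(1, p + 1):
    Xk = X[:, :k]
    beta_hat, *_ = np.linalg.lstsq(Xk, y, rcond=None)
    rss = float(np.sum((y - Xk @ beta_hat) ** 2))
    print(f"k = {k:2d} predictors, RSS = {rss:10.2f}")

# The printed RSS drops sharply for the first few predictors and then
# flattens out: the fit-versus-complexity curve exhibits diminishing returns.

The same qualitative curve appears if the complexity axis is the number of mixture components in a finite mixture model and the fit criterion is training log-likelihood, which is the setting the paper emphasizes.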

Cite

Text

Cadez and Smyth. "Model Complexity, Goodness of Fit and Diminishing Returns." Neural Information Processing Systems, 2000.

Markdown

[Cadez and Smyth. "Model Complexity, Goodness of Fit and Diminishing Returns." Neural Information Processing Systems, 2000.](https://mlanthology.org/neurips/2000/cadez2000neurips-model/)

BibTeX

@inproceedings{cadez2000neurips-model,
  title     = {{Model Complexity, Goodness of Fit and Diminishing Returns}},
  author    = {Cadez, Igor V. and Smyth, Padhraic},
  booktitle = {Neural Information Processing Systems},
  year      = {2000},
  pages     = {388--394},
  url       = {https://mlanthology.org/neurips/2000/cadez2000neurips-model/}
}