Hyperparameters Evidence and Generalisation for an Unrealisable Rule
Abstract
Using a statistical mechanical formalism, we calculate the evidence, generalisation error and consistency measure for a linear perceptron trained and tested on a set of examples generated by a nonlinear teacher. The teacher is said to be unrealisable because the student can never model it without error. Our model allows us to interpolate between the known case of a linear teacher and an unrealisable, nonlinear teacher. A comparison of the hyperparameters which maximise the evidence with those that optimise the performance measures reveals that, in the nonlinear case, the evidence procedure is a misleading guide to optimising performance. Finally, we explore the extent to which the evidence procedure is unreliable and find that, despite being sub-optimal, in some circumstances it might be a useful method for fixing the hyperparameters.
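The setting described in the abstract can be illustrated with a small numerical sketch (this is not the paper's statistical-mechanical calculation): a Bayesian linear student with weight-decay hyperparameter alpha is trained on data from a nonlinear teacher, and the alpha that maximises the evidence is compared with the alpha that minimises generalisation error. The teacher gain, noise level, dimensions and sample sizes below are assumptions chosen purely for demonstration.

```python
# Illustrative sketch only: evidence maximisation vs. generalisation error
# for a linear student learning an unrealisable (nonlinear) teacher.
# All constants here are assumed values for the demonstration.
import numpy as np

rng = np.random.default_rng(0)
d, n_train, n_test = 20, 100, 5000
sigma_noise = 0.1                       # assumed output noise level

w_teacher = rng.standard_normal(d)      # teacher weight vector

def teacher(X, gain=2.0):
    # Nonlinear teacher: tanh of the linear field. A linear student
    # cannot realise this mapping exactly, hence "unrealisable".
    return np.tanh(gain * X @ w_teacher / np.sqrt(d))

X_tr = rng.standard_normal((n_train, d))
y_tr = teacher(X_tr) + sigma_noise * rng.standard_normal(n_train)
X_te = rng.standard_normal((n_test, d))
y_te = teacher(X_te)                    # noiseless targets for the test error

beta = 1.0 / sigma_noise**2             # noise precision, assumed known

def log_evidence_and_test_error(alpha):
    # Standard Gaussian linear-model log evidence (marginal likelihood)
    # with prior precision alpha, plus the test error of the posterior mean.
    A = alpha * np.eye(d) + beta * X_tr.T @ X_tr
    m = beta * np.linalg.solve(A, X_tr.T @ y_tr)
    e = 0.5 * beta * np.sum((y_tr - X_tr @ m) ** 2) + 0.5 * alpha * m @ m
    _, logdet = np.linalg.slogdet(A)
    log_ev = (0.5 * d * np.log(alpha) + 0.5 * n_train * np.log(beta)
              - e - 0.5 * logdet - 0.5 * n_train * np.log(2 * np.pi))
    test_err = np.mean((y_te - X_te @ m) ** 2)
    return log_ev, test_err

alphas = np.logspace(-3, 3, 61)
results = np.array([log_evidence_and_test_error(a) for a in alphas])
alpha_ev = alphas[np.argmax(results[:, 0])]    # evidence-maximising weight decay
alpha_gen = alphas[np.argmin(results[:, 1])]   # generalisation-optimal weight decay
print(f"evidence picks alpha = {alpha_ev:.3g}, "
      f"generalisation picks alpha = {alpha_gen:.3g}")
```

Re-running the sketch with a larger or smaller teacher gain lets one see whether the evidence-selected and generalisation-optimal hyperparameters coincide or drift apart, which is the comparison the paper carries out analytically for the interpolation between a linear and a nonlinear teacher.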
Cite

Text
Marion and Saad. "Hyperparameters Evidence and Generalisation for an Unrealisable Rule." Neural Information Processing Systems, 1994.

Markdown
[Marion and Saad. "Hyperparameters Evidence and Generalisation for an Unrealisable Rule." Neural Information Processing Systems, 1994.](https://mlanthology.org/neurips/1994/marion1994neurips-hyperparameters/)

BibTeX
@inproceedings{marion1994neurips-hyperparameters,
title = {{Hyperparameters Evidence and Generalisation for an Unrealisable Rule}},
author = {Marion, Glenn and Saad, David},
booktitle = {Neural Information Processing Systems},
year = {1994},
pages = {255--262},
url = {https://mlanthology.org/neurips/1994/marion1994neurips-hyperparameters/}
}