Linear Constraints on Weight Representation for Generalized Learning of Multilayer Networks

Abstract

In this article, we present a technique to improve the generalization ability of multilayer neural networks. The proposed method introduces linear constraints on the weight representation based on invariance properties of the training targets. We propose a learning method that incorporates effective linear constraints into the error function as a penalty term. Furthermore, introducing such constraints reduces the VC dimension of the neural networks, and we derive bounds on the VC dimension of networks with such constraints. Finally, we demonstrate the effectiveness of the proposed method through experiments.
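The abstract describes folding linear constraints on the weights into the error function as a penalty term. As a rough illustration (not the paper's exact formulation), the sketch below augments a least-squares loss with a penalty λ‖Aw − b‖², where A and b encode a hypothetical invariance constraint w₀ = w₁ (symmetry between two inputs); the data, constraint matrix, and penalty strength are all illustrative assumptions.

```python
import numpy as np

# Hedged sketch: augment a squared-error loss with a penalty term that pushes
# the weights toward satisfying a linear constraint A w = b. Here the
# constraint w0 - w1 = 0 encodes a simple invariance (symmetry between the
# two inputs). This is an illustration, not the paper's exact method.

rng = np.random.default_rng(0)

# Toy data whose target is symmetric in the two inputs: y = x0 + x1.
X = rng.normal(size=(100, 2))
y = X[:, 0] + X[:, 1]

A = np.array([[1.0, -1.0]])  # linear constraint A w = b, i.e. w0 - w1 = 0
b = np.array([0.0])
lam = 10.0                   # penalty strength (illustrative choice)

w = rng.normal(size=2)
lr = 0.01
for _ in range(2000):
    err = X @ w - y
    # Gradient of  (1/n)||Xw - y||^2  +  lam * ||Aw - b||^2
    grad = 2 * X.T @ err / len(y) + 2 * lam * A.T @ (A @ w - b)
    w -= lr * grad

print(w)  # both weights end up close to 1 and nearly equal
```

With the penalty active, gradient descent drives the weights toward the constraint surface while fitting the data, so the learned weights are both near 1 and nearly identical; a multilayer version would apply the same penalty over the network's weight vector.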

Cite

Text

Ishii and Kumazawa. "Linear Constraints on Weight Representation for Generalized Learning of Multilayer Networks." Neural Computation, 2001. doi:10.1162/089976601317098556

Markdown

[Ishii and Kumazawa. "Linear Constraints on Weight Representation for Generalized Learning of Multilayer Networks." Neural Computation, 2001.](https://mlanthology.org/neco/2001/ishii2001neco-linear/) doi:10.1162/089976601317098556

BibTeX

@article{ishii2001neco-linear,
  title     = {{Linear Constraints on Weight Representation for Generalized Learning of Multilayer Networks}},
  author    = {Ishii, Masaki and Kumazawa, Itsuo},
  journal   = {Neural Computation},
  year      = {2001},
  pages     = {2851--2863},
  doi       = {10.1162/089976601317098556},
  volume    = {13},
  url       = {https://mlanthology.org/neco/2001/ishii2001neco-linear/}
}