Discriminant Component Pruning: Regularization and Interpretation of Multi-Layered Back-Propagation Networks

Abstract

Neural networks are often employed as tools in classification tasks. The use of large networks increases the likelihood that the task will be learned, although it may also lead to increased complexity. Pruning is an effective way of reducing the complexity of large networks. We present discriminant components pruning (DCP), a method of pruning matrices of summed contributions between layers of a neural network. Pruning the network can also aid interpretation of the underlying functions it has learned. Generalization performance should be maintained at its optimal level following pruning. We demonstrate DCP's effectiveness at maintaining generalization performance, its applicability to a wider range of problems, and the usefulness of such pruning for network interpretation. Possible enhancements are discussed for the identification of the optimal reduced rank and the inclusion of nonlinear neural activation functions in the pruning algorithm.
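The core idea, pruning the matrix of summed contributions between two layers by keeping only its leading components, can be illustrated with a minimal sketch. This is not the authors' exact DCP algorithm (which selects discriminant components and handles nonlinear activations); it simply uses a truncated SVD of the summed-contribution matrix as a stand-in for the component-selection step, and the function and variable names are illustrative:

```python
import numpy as np

def reduced_rank_weights(W, X, rank):
    """Replace a layer's weights with a reduced-rank effective version.

    X: inputs to the layer, shape (n_samples, d_in)
    W: weight matrix, shape (d_in, d_out)
    rank: number of components to retain

    Sketch only: DCP prunes the summed-contribution matrix Z = X @ W
    by retaining leading components; here a plain truncated SVD of Z
    approximates that step.
    """
    Z = X @ W                                      # summed contributions
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    Zr = (U[:, :rank] * s[:rank]) @ Vt[:rank]      # rank-`rank` approximation
    # Recover an effective weight matrix W_r with X @ W_r ≈ Zr
    # via least squares.
    W_r, *_ = np.linalg.lstsq(X, Zr, rcond=None)
    return W_r

# Example: prune a random layer to rank 3.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))
W = rng.standard_normal((10, 8))
W_pruned = reduced_rank_weights(W, X, rank=3)
```

Because the reduced-rank matrix has the same shape as the original weights, the pruned layer drops into the network unchanged; only the effective rank of its transformation is reduced.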

Cite

Text

Koene and Takane. "Discriminant Component Pruning: Regularization and Interpretation of Multi-Layered Back-Propagation Networks." Neural Computation, 1999. doi:10.1162/089976699300016665

Markdown

[Koene and Takane. "Discriminant Component Pruning: Regularization and Interpretation of Multi-Layered Back-Propagation Networks." Neural Computation, 1999.](https://mlanthology.org/neco/1999/koene1999neco-discriminant/) doi:10.1162/089976699300016665

BibTeX

@article{koene1999neco-discriminant,
  title     = {{Discriminant Component Pruning: Regularization and Interpretation of Multi-Layered Back-Propagation Networks}},
  author    = {Koene, Randal A. and Takane, Yoshio},
  journal   = {Neural Computation},
  year      = {1999},
  pages     = {783--802},
  doi       = {10.1162/089976699300016665},
  volume    = {11},
  url       = {https://mlanthology.org/neco/1999/koene1999neco-discriminant/}
}