GDS: Gradient Descent Generation of Symbolic Classification Rules

Abstract

Imagine you have designed a neural network that successfully learns a complex classification task. What are the relevant input features the classifier relies on, and how are these features combined to produce the classification decisions? There are applications where a deeper insight into the structure of an adaptive system, and thus into the underlying classification problem, may well be as important as the system's performance characteristics, e.g. in economics or medicine. GDS is a backpropagation-based training scheme that produces networks transformable into an equivalent and concise set of IF-THEN rules. This is achieved by imposing penalty terms on the network parameters that adapt the network to the expressive power of this class of rules. Thus during training we simultaneously minimize classification and transformation error. Some real-world tasks demonstrate the viability of our approach.
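The core idea of the abstract — minimizing a classification error and a rule-transformation penalty simultaneously — can be sketched in a few lines. The sketch below is an illustration, not the paper's exact formulation: the specific penalty p(w) = w²(w−1)²(w+1)², which pulls each weight toward the rule-like values −1, 0, or +1, and the toy task are assumptions made for this example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def penalty_grad(w):
    # Gradient of the illustrative penalty p(w) = w^2 (w^2 - 1)^2,
    # whose minima at -1, 0, +1 make trained weights readable as a rule.
    return 2.0 * w * (w**2 - 1.0)**2 + 4.0 * w**3 * (w**2 - 1.0)

# Toy task with bipolar inputs: the class is the sign of x1 alone;
# x2 is irrelevant and should be pruned (its weight driven to 0).
X = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]], dtype=float)
y = np.array([0.0, 0.0, 1.0, 1.0])

w = np.zeros(2)
b = 0.0
lam, lr = 0.05, 0.3   # penalty strength and learning rate (assumed values)

for _ in range(3000):
    p = sigmoid(X @ w + b)
    err = p - y                                  # dE_class/dlogit for cross-entropy
    # Combined update: classification gradient plus penalty gradient,
    # i.e. gradient descent on E_class + lam * E_penalty.
    w -= lr * (X.T @ err / len(y) + lam * penalty_grad(w))
    b -= lr * err.mean()

rule_weights = np.round(w)   # read the rule off the near-integer weights
# rule_weights == [1, 0]: "IF x1 = +1 THEN class 1", x2 ignored
```

Without the penalty term, cross-entropy would drive the weights to arbitrary real values; with it, each weight settles near a member of {−1, 0, +1}, so the trained unit can be transcribed directly as an IF-THEN rule.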

Cite

Text

Blasig. "GDS: Gradient Descent Generation of Symbolic Classification Rules." Neural Information Processing Systems, 1993.

Markdown

[Blasig. "GDS: Gradient Descent Generation of Symbolic Classification Rules." Neural Information Processing Systems, 1993.](https://mlanthology.org/neurips/1993/blasig1993neurips-gds/)

BibTeX

@inproceedings{blasig1993neurips-gds,
  title     = {{GDS: Gradient Descent Generation of Symbolic Classification Rules}},
  author    = {Blasig, Reinhard},
  booktitle = {Neural Information Processing Systems},
  year      = {1993},
  pages     = {1093--1100},
  url       = {https://mlanthology.org/neurips/1993/blasig1993neurips-gds/}
}