An Effective Learning Method for Max-Min Neural Networks
Abstract
Max and min operations have interesting properties that facilitate the exchange of information between the symbolic and real-valued domains. As such, neural networks that employ max-min activation functions have been a subject of interest in recent years. Since max-min functions are not strictly differentiable, we propose a mathematically sound learning method based on using Fourier convergence analysis of side-derivatives to derive a gradient descent technique for max-min error functions. This method is applied to a "typical" fuzzy-neural network model employing max-min activation functions. We show how this network can be trained to perform function approximation; its performance was found to be better than that of a conventional feedforward neural network.
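The core difficulty the paper addresses is that max and min are not differentiable at ties, so ordinary backpropagation does not directly apply; the proposed method uses side-derivatives (one-sided derivatives) in place of the gradient. The following is a minimal illustrative sketch, not the paper's actual algorithm: a single hypothetical max-min neuron y = max_i min(w_i, x_i) trained by gradient descent on squared error, where a fixed side-derivative convention resolves the non-differentiable points.

```python
import numpy as np

def maxmin_forward(w, x):
    """Max-min neuron: y = max_i min(w_i, x_i). Returns y and the selected index."""
    m = np.minimum(w, x)
    k = int(np.argmax(m))          # ties broken by first index (a side-derivative choice)
    return m[k], k

def maxmin_grad_w(w, x, k):
    """Side-derivative of y w.r.t. w: nonzero only at the selected index k,
    and only when w_k is the active (smaller) term in min(w_k, x_k)."""
    g = np.zeros_like(w)
    if w[k] <= x[k]:               # at a tie, pick the side where w is active
        g[k] = 1.0
    return g

# Hypothetical training loop: drive y toward a target t by side-gradient descent.
rng = np.random.default_rng(0)
w = rng.uniform(0.0, 1.0, size=4)  # illustrative initial weights
x = np.array([0.2, 0.9, 0.5, 0.7])
t, lr = 0.6, 0.1
for _ in range(200):
    y, k = maxmin_forward(w, x)
    g = maxmin_grad_w(w, x, k)
    w -= lr * (y - t) * g          # squared-error step using the side-derivative
    w = np.clip(w, 0.0, 1.0)       # keep weights in [0, 1], as in fuzzy networks

y, _ = maxmin_forward(w, x)
print(y)
```

Because the side-derivative routes the error signal to exactly one weight per step (the active argument of the max and the active side of the min), each update is well defined everywhere, which is the property the paper's convergence analysis is meant to justify.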
Cite
Text
Teow and Loe. "An Effective Learning Method for Max-Min Neural Networks." International Joint Conference on Artificial Intelligence, 1997.
Markdown
[Teow and Loe. "An Effective Learning Method for Max-Min Neural Networks." International Joint Conference on Artificial Intelligence, 1997.](https://mlanthology.org/ijcai/1997/teow1997ijcai-effective/)
BibTeX
@inproceedings{teow1997ijcai-effective,
title = {{An Effective Learning Method for Max-Min Neural Networks}},
author = {Teow, Loo-Nin and Loe, Kia-Fock},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {1997},
pages = {1134--1139},
url = {https://mlanthology.org/ijcai/1997/teow1997ijcai-effective/}
}