Q-Learning and Redundancy Reduction in Classifier Systems with Internal State
Abstract
The Q-Credit Assignment (QCA) is a Q-learning-based method for allocating credit to rules in Classifier Systems with internal state. It is more powerful than other proposed methods because it correctly evaluates shared rules, but it incurs a large computational cost due to the Multi-Layer Perceptron (MLP) that stores the evaluation function. We present a method that reduces this cost by removing redundancy from the input space of the MLP through feature extraction. Experimental results show that QCA with Redundancy Reduction (QCA-RR) preserves the advantages of QCA while significantly reducing both the learning time and the evaluation time after learning.
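The abstract's idea can be illustrated with a minimal sketch (not the authors' code): raw rule/message inputs are first projected onto a smaller feature vector by discarding redundant input columns, and a Q-learning update is then applied to a value function over the reduced features. A plain linear approximator stands in for the paper's MLP, and the column-deduplication step is a hypothetical, toy stand-in for the paper's feature extraction.

```python
def kept_columns(samples):
    """Indices of input columns to keep: a column is dropped if it
    duplicates an earlier column across all samples. This is a toy
    stand-in for the feature extraction used by QCA-RR."""
    keep, seen = [], set()
    for j in range(len(samples[0])):
        col = tuple(s[j] for s in samples)
        if col not in seen:
            seen.add(col)
            keep.append(j)
    return keep

def project(x, keep):
    """Reduce a raw input vector to the retained feature columns."""
    return [x[j] for j in keep]

def q_value(w, feats):
    """Value estimate; a linear approximator stands in for the MLP."""
    return sum(wi * fi for wi, fi in zip(w, feats))

def q_update(w, feats, reward, next_q, alpha=0.1, gamma=0.9):
    """One Q-learning step on the reduced feature vector."""
    td_error = reward + gamma * next_q - q_value(w, feats)
    return [wi + alpha * td_error * fi for wi, fi in zip(w, feats)]

# Columns 2 and 3 duplicate columns 0 and 1, so evaluation and
# learning operate on a 2-dimensional input instead of 4.
samples = [[1, 0, 1, 0], [0, 1, 0, 1], [1, 1, 1, 1]]
keep = kept_columns(samples)          # -> [0, 1]
feats = project(samples[0], keep)     # -> [1, 0]
w = q_update([0.0, 0.0], feats, reward=1.0, next_q=0.0)
```

Shrinking the input dimension cuts the per-step cost of both evaluating and training the approximator, which is the motivation the abstract gives for QCA-RR.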
Cite
Text
Giani et al. "Q-Learning and Redundancy Reduction in Classifier Systems with Internal State." European Conference on Machine Learning, 1998. doi:10.1007/BFB0026707
Markdown
[Giani et al. "Q-Learning and Redundancy Reduction in Classifier Systems with Internal State." European Conference on Machine Learning, 1998.](https://mlanthology.org/ecmlpkdd/1998/giani1998ecml-qlearning/) doi:10.1007/BFB0026707
BibTeX
@inproceedings{giani1998ecml-qlearning,
title = {{Q-Learning and Redundancy Reduction in Classifier Systems with Internal State}},
author = {Giani, Antonella and Sticca, Andrea and Baiardi, Fabrizio and Starita, Antonina},
booktitle = {European Conference on Machine Learning},
year = {1998},
pages = {364--369},
doi = {10.1007/BFB0026707},
url = {https://mlanthology.org/ecmlpkdd/1998/giani1998ecml-qlearning/}
}