Evaluating and Comparing Classifiers: Complexity Measures
Abstract
Relevant literature on Kolmogorov complexity measures and on trade-offs of classifier accuracy for reduced complexity is reviewed, seeking a pragmatic methodology for the practising applications analyst. Significant findings are that: (1) An accuracy/complexity trade-off is desirable; (2) Combined measures of accuracy/complexity are not practical, owing to difficulties in encoding constraint satisfaction, the lack of sampling statistics and suitable tests of the null hypothesis, and the practical difficulties of encoding complex functions and encoding across families of classifiers; (3) Therefore, a generalized version of the CART [5] 1-SE rule is recommended; (4) Kolmogorov complexity is not practically computable (see (2)); and, therefore, (5) Simply measuring response times on a target environment is the recommended measure of complexity.
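The generalized 1-SE rule recommended in finding (3) can be illustrated with a minimal sketch: among candidate classifiers, select the least complex one whose cross-validated error is within one standard error of the lowest observed error. The function name and array-based interface below are illustrative assumptions, not from the paper.

```python
import numpy as np

def one_se_rule(cv_errors, cv_stderrs, complexities):
    """Illustrative sketch of a generalized CART 1-SE rule.

    Given per-candidate cross-validated error estimates, their standard
    errors, and a complexity measure (e.g. measured response time on the
    target environment), return the index of the least complex candidate
    whose error is within one standard error of the minimum error.
    """
    cv_errors = np.asarray(cv_errors, dtype=float)
    cv_stderrs = np.asarray(cv_stderrs, dtype=float)
    complexities = np.asarray(complexities, dtype=float)

    best = np.argmin(cv_errors)                      # lowest-error candidate
    threshold = cv_errors[best] + cv_stderrs[best]   # 1-SE band
    eligible = np.flatnonzero(cv_errors <= threshold)

    # Among candidates inside the band, trade accuracy for simplicity.
    return int(eligible[np.argmin(complexities[eligible])])
```

For example, with errors `[0.30, 0.22, 0.20, 0.21]`, a uniform standard error of `0.02`, and complexities `[1, 2, 3, 4]`, the band is `[0.20, 0.22]`, and the rule selects the second candidate (index 1) as the simplest model within the band.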
Cite

Text:
Martin. "Evaluating and Comparing Classifiers: Complexity Measures." Pre-proceedings of the Fifth International Workshop on Artificial Intelligence and Statistics, 1995.

Markdown:
[Martin. "Evaluating and Comparing Classifiers: Complexity Measures." Pre-proceedings of the Fifth International Workshop on Artificial Intelligence and Statistics, 1995.](https://mlanthology.org/aistats/1995/martin1995aistats-evaluating/)

BibTeX:
@inproceedings{martin1995aistats-evaluating,
title = {{Evaluating and Comparing Classifiers: Complexity Measures}},
author = {Martin, J. Kent},
booktitle = {Pre-proceedings of the Fifth International Workshop on Artificial Intelligence and Statistics},
year = {1995},
pages = {372-378},
volume = {R0},
url = {https://mlanthology.org/aistats/1995/martin1995aistats-evaluating/}
}