A General Theoretical Framework for Learning Smallest Interpretable Models
Abstract
We develop a general algorithmic framework that allows us to obtain fixed-parameter tractability for computing smallest symbolic models that represent given data. Our framework applies to all ML model types that admit a certain extension property. By showing this extension property for decision trees, decision sets, decision lists, and binary decision diagrams, we obtain that minimizing these fundamental model types is fixed-parameter tractable. Our framework even applies to ensembles, which combine individual models by majority decision.
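To make the underlying problem concrete, here is a minimal brute-force sketch (purely illustrative, not the paper's fixed-parameter algorithm, and all names are hypothetical) of the task the abstract describes for decision trees: given labeled Boolean data, find the size of a smallest decision tree that classifies every example correctly.

```python
def consistent_leaf(examples):
    """True if a single leaf suffices, i.e. all examples share one label."""
    return len({label for _, label in examples}) <= 1

def fits(examples, feats, s):
    """True if some decision tree with at most s internal (test) nodes
    classifies `examples` exactly. Each example is (feature_tuple, label)."""
    if consistent_leaf(examples):
        return True
    if s == 0:
        return False
    for f in feats:
        left = [e for e in examples if e[0][f] == 0]
        right = [e for e in examples if e[0][f] == 1]
        if not left or not right:  # test on f does not split the data
            continue
        # distribute the remaining s - 1 nodes over the two subtrees
        for l in range(s):
            if fits(left, feats, l) and fits(right, feats, s - 1 - l):
                return True
    return False

def smallest_tree_size(examples, feats):
    """Iterative deepening: smallest number of test nodes needed."""
    s = 0
    while not fits(examples, feats, s):
        s += 1
    return s
```

For example, XOR over two features needs three test nodes (a root plus one test in each subtree), while data labeled by a single feature needs one. This exhaustive search is exponential in general; the paper's contribution is showing that, for model types with the extension property, minimization is fixed-parameter tractable.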
Cite
Text
Ordyniak et al. "A General Theoretical Framework for Learning Smallest Interpretable Models." AAAI Conference on Artificial Intelligence, 2024. doi:10.1609/AAAI.V38I9.28937
Markdown
[Ordyniak et al. "A General Theoretical Framework for Learning Smallest Interpretable Models." AAAI Conference on Artificial Intelligence, 2024.](https://mlanthology.org/aaai/2024/ordyniak2024aaai-general/) doi:10.1609/AAAI.V38I9.28937
BibTeX
@inproceedings{ordyniak2024aaai-general,
title = {{A General Theoretical Framework for Learning Smallest Interpretable Models}},
author = {Ordyniak, Sebastian and Paesani, Giacomo and Rychlicki, Mateusz and Szeider, Stefan},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2024},
pages = {10662--10669},
doi = {10.1609/AAAI.V38I9.28937},
url = {https://mlanthology.org/aaai/2024/ordyniak2024aaai-general/}
}