Generalized Additive Models via Direct Optimization of Regularized Decision Stump Forests
Abstract
We explore ensembles of axis-aligned decision stumps, which can be viewed as a generalized additive model (GAM): stumps that use the same feature are grouped to form the shape function for that feature. Instead of relying on boosting or bagging, we employ alternating optimization to learn a fixed-size stump forest. With the other stumps held fixed, we optimize the parameters of each stump exactly by enumeration; with all stump splits fixed, we optimize the leaf values jointly by solving a convex problem. To address the overfitting inherent in naively optimizing stump forests, we propose effective regularization techniques. Our regularized stump forests achieve accuracy comparable to state-of-the-art GAM methods while using fewer parameters. This is the first work to successfully learn stump forests without traditional ensembling techniques such as bagging or boosting.
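To make the alternating scheme concrete, below is a minimal sketch (not the paper's implementation) assuming a regression setting with squared loss and an L2 penalty on leaf values; the paper's actual losses and regularizers may differ. The enumeration step scans all (feature, threshold) splits for one stump with the others held fixed; the joint step refits all leaf values by ridge regression, which is the convex problem in this squared-loss instance. All names (fit_stump_forest, lam, n_stumps) are illustrative.

import numpy as np

def fit_stump_forest(X, y, n_stumps=50, lam=1.0, n_iters=10, rng=None):
    """Learn a fixed-size forest of axis-aligned decision stumps by
    alternating optimization (a sketch, not the authors' code).
    Stumps that share a feature implicitly form that feature's GAM
    shape function. Assumes at least one non-constant feature."""
    n, d = X.shape
    rng = np.random.default_rng(rng)
    # Each stump: (feature index, threshold, left value, right value).
    feats = rng.integers(0, d, size=n_stumps)
    thrs = np.array([rng.choice(X[:, j]) for j in feats])
    lefts = np.zeros(n_stumps)
    rights = np.zeros(n_stumps)

    def stump_pred(j, t, vl, vr):
        return np.where(X[:, j] <= t, vl, vr)

    pred = sum(stump_pred(feats[m], thrs[m], lefts[m], rights[m])
               for m in range(n_stumps))

    for _ in range(n_iters):
        # Step 1: update each stump exactly, with the others fixed,
        # by enumerating all (feature, threshold) splits.
        for m in range(n_stumps):
            pred -= stump_pred(feats[m], thrs[m], lefts[m], rights[m])
            r = y - pred  # residual the m-th stump should fit
            best = (np.inf, None)
            for j in range(d):
                for t in np.unique(X[:, j])[:-1]:  # candidate thresholds
                    mask = X[:, j] <= t
                    nl = mask.sum()
                    # Closed-form ridge-regularized leaf values.
                    vl = r[mask].sum() / (nl + lam)
                    vr = r[~mask].sum() / (n - nl + lam)
                    err = (np.sum((r - np.where(mask, vl, vr)) ** 2)
                           + lam * (vl ** 2 + vr ** 2))
                    if err < best[0]:
                        best = (err, (j, t, vl, vr))
            feats[m], thrs[m], lefts[m], rights[m] = best[1]
            pred += stump_pred(feats[m], thrs[m], lefts[m], rights[m])

        # Step 2: with all splits fixed, refit every leaf value jointly.
        # Each example activates one leaf per stump, so the leaf values
        # solve a convex ridge-regression problem over indicator features.
        Z = np.zeros((n, 2 * n_stumps))
        for m in range(n_stumps):
            mask = X[:, feats[m]] <= thrs[m]
            Z[mask, 2 * m] = 1.0
            Z[~mask, 2 * m + 1] = 1.0
        w = np.linalg.solve(Z.T @ Z + lam * np.eye(2 * n_stumps), Z.T @ y)
        lefts, rights = w[0::2], w[1::2]
        pred = Z @ w

    return feats, thrs, lefts, rights

# Example usage on synthetic data (illustrative only):
X = np.random.default_rng(0).normal(size=(200, 5))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]
feats, thrs, lefts, rights = fit_stump_forest(X, y, n_stumps=20, n_iters=5)

With squared loss, both the per-stump leaf values and the joint refit have closed forms; for other convex losses the joint step remains a convex problem but would need an iterative solver.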
Cite
Text
Gabidolla and Carreira-Perpiñán. "Generalized Additive Models via Direct Optimization of Regularized Decision Stump Forests." Proceedings of the 42nd International Conference on Machine Learning, 2025.

Markdown
[Gabidolla and Carreira-Perpiñán. "Generalized Additive Models via Direct Optimization of Regularized Decision Stump Forests." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/gabidolla2025icml-generalized/)

BibTeX
@inproceedings{gabidolla2025icml-generalized,
title = {{Generalized Additive Models via Direct Optimization of Regularized Decision Stump Forests}},
author = {Gabidolla, Magzhan and Carreira-Perpiñán, Miguel Á.},
booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
year = {2025},
pages = {18047--18061},
volume = {267},
url = {https://mlanthology.org/icml/2025/gabidolla2025icml-generalized/}
}