Neural Rule Lists: Learning Discretizations, Rules, and Order in One Go
Abstract
Interpretable machine learning is essential in high-stakes domains like healthcare. Rule lists are a popular choice due to their transparency and accuracy, but learning them effectively remains a challenge. Existing methods require feature pre-discretization, constrain rule complexity or ordering, or struggle to scale. We present NeuRules, a novel end-to-end framework that overcomes these limitations. At its core, NeuRules transforms the inherently combinatorial task of rule list learning into a differentiable optimization problem, enabling gradient-based learning. It simultaneously discovers feature conditions, assembles them into conjunctive rules, and determines their order, without pre-processing or manual constraints. A key contribution is a gradient shaping technique that steers learning toward sparse rules with strong predictive performance. To produce ordered lists, we introduce a differentiable relaxation that, through simulated annealing, converges to a strict rule list. Extensive experiments show that NeuRules consistently outperforms combinatorial and neural baselines on both binary and multi-class classification tasks across a wide range of datasets.
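The core idea of relaxing a rule list into a differentiable form can be illustrated with a minimal sketch. This is not the authors' implementation: the function names, the sigmoid threshold relaxation, the gated product for conjunctions, and the stick-breaking scheme for first-match semantics are all illustrative assumptions. As the temperature parameter approaches zero, the soft evaluation approaches a strict rule list.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def soft_rule_list_predict(X, thresholds, gates, rule_probs, default_prob, temp):
    """Illustrative differentiable relaxation of a rule list (not the paper's code).

    Rule r softly tests x_j > thresholds[r, j] for each feature j whose
    gate gates[r, j] is near 1; sigmoid((x - t) / temp) relaxes the hard
    threshold test, and the gated product relaxes the conjunction.
    First-match semantics are relaxed by stick-breaking: probability mass
    not claimed by earlier rules falls through to later rules, and any
    remainder goes to the default prediction.
    """
    # Soft per-feature conditions: shape (n_samples, n_rules, n_features)
    cond = sigmoid((X[:, None, :] - thresholds[None, :, :]) / temp)
    # Gated soft conjunction: a gate of 0 makes that condition always "true"
    fire = np.prod(gates * cond + (1.0 - gates), axis=2)  # (n_samples, n_rules)

    remaining = np.ones(X.shape[0])  # mass not yet claimed by earlier rules
    prob = np.zeros(X.shape[0])
    for r in range(fire.shape[1]):
        claim = remaining * fire[:, r]
        prob += claim * rule_probs[r]
        remaining -= claim
    return prob + remaining * default_prob
```

For a single rule "if x0 > 0.5 predict 1, else predict 0" and a small temperature, `soft_rule_list_predict` returns values close to the strict list's outputs, while at higher temperatures the boundaries blur and gradients can flow to the thresholds and gates.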
Cite

Text:
Xu et al. "Neural Rule Lists: Learning Discretizations, Rules, and Order in One Go." Advances in Neural Information Processing Systems, 2025.

Markdown:
[Xu et al. "Neural Rule Lists: Learning Discretizations, Rules, and Order in One Go." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/xu2025neurips-neural/)

BibTeX:
@inproceedings{xu2025neurips-neural,
  title     = {{Neural Rule Lists: Learning Discretizations, Rules, and Order in One Go}},
  author    = {Xu, Sascha and Walter, Nils Philipp and Vreeken, Jilles},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/xu2025neurips-neural/}
}