Beyond Sparsity: Tree Regularization of Deep Models for Interpretability

Abstract

The lack of interpretability remains a key barrier to the adoption of deep models in many applications. In this work, we explicitly regularize deep models so human users might step through the process behind their predictions in little time. Specifically, we train deep time-series models so their class-probability predictions have high accuracy while being closely modeled by decision trees with few nodes. Using intuitive toy examples as well as medical tasks for treating sepsis and HIV, we demonstrate that this new tree regularization yields models that are easier for humans to simulate than those trained with simpler L1 or L2 penalties, without sacrificing predictive power.
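The core idea in the abstract can be illustrated with a minimal sketch: fit a small decision tree to the deep model's predictions, then penalize the tree's average decision-path length, a proxy for how long a human needs to simulate a prediction. All names below (`path_length`, `average_path_length`, `regularized_loss`) are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of the tree-regularization objective (illustrative only).
# A proxy decision tree is represented as nested tuples:
# internal node = (feature_index, threshold, left_subtree, right_subtree),
# leaf = a plain class label.

def path_length(tree, x):
    """Count the decisions needed to classify example x with the proxy tree."""
    depth = 0
    while isinstance(tree, tuple):
        feat, thresh, left, right = tree
        tree = left if x[feat] <= thresh else right
        depth += 1
    return depth

def average_path_length(tree, X):
    """Tree-complexity penalty: mean decisions per example over dataset X."""
    return sum(path_length(tree, x) for x in X) / len(X)

def regularized_loss(prediction_loss, tree, X, lam=0.1):
    """Total objective: prediction loss plus lam times the tree penalty."""
    return prediction_loss + lam * average_path_length(tree, X)

# Toy tree: split on feature 0; the right branch splits again on feature 1.
tree = (0, 0.5, "low", (1, 0.3, "mid", "high"))
X = [[0.2, 0.1], [0.8, 0.1], [0.9, 0.9]]
print(average_path_length(tree, X))  # (1 + 2 + 2) / 3 ≈ 1.667
```

In the paper itself the penalty is made differentiable via a surrogate network, so it can be optimized with gradient descent alongside the deep model's loss; this sketch only shows the non-differentiable penalty being approximated.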

Cite

Text

Wu et al. "Beyond Sparsity: Tree Regularization of Deep Models for Interpretability." AAAI Conference on Artificial Intelligence, 2018. doi:10.1609/AAAI.V32I1.11501

Markdown

[Wu et al. "Beyond Sparsity: Tree Regularization of Deep Models for Interpretability." AAAI Conference on Artificial Intelligence, 2018.](https://mlanthology.org/aaai/2018/wu2018aaai-beyond/) doi:10.1609/AAAI.V32I1.11501

BibTeX

@inproceedings{wu2018aaai-beyond,
  title     = {{Beyond Sparsity: Tree Regularization of Deep Models for Interpretability}},
  author    = {Wu, Mike and Hughes, Michael C. and Parbhoo, Sonali and Zazzi, Maurizio and Roth, Volker and Doshi-Velez, Finale},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2018},
  pages     = {1670--1678},
  doi       = {10.1609/AAAI.V32I1.11501},
  url       = {https://mlanthology.org/aaai/2018/wu2018aaai-beyond/}
}