PenDer: Incorporating Shape Constraints via Penalized Derivatives
Abstract
When deploying machine learning models in the real world, system designers may wish models to exhibit certain shape behavior, i.e., to have model outputs follow a particular shape with respect to input features. Trends such as monotonicity, convexity, and diminishing or accelerating returns are among the desired shapes. The presence of these shapes makes the model more interpretable for system designers and adequately fair for customers. We observe that many such common shapes are related to derivatives and propose a new approach, PenDer (Penalizing Derivatives), which incorporates these shape constraints by penalizing the derivatives. We further present an Augmented Lagrangian Method (ALM) to solve this constrained optimization problem. Experiments on three real-world datasets illustrate that even though both PenDer and state-of-the-art Lattice models achieve similar conformance to the desired shape, PenDer better captures the sensitivity of predictions with respect to the intended features. We also demonstrate that PenDer achieves better test performance than Lattice while enforcing more desirable shape behavior.
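To illustrate the core idea of penalizing derivatives, here is a minimal sketch in PyTorch of a monotonicity penalty: the partial derivative of the model output with respect to a chosen feature is computed by automatic differentiation, and its negative part is added to the training loss. The function names, the fixed penalty weight `lam` (standing in for the paper's Augmented Lagrangian multiplier updates), and the use of MSE loss are assumptions for illustration, not the authors' implementation.

```python
import torch

def monotonicity_penalty(model, x, feature_idx):
    """Penalize negative partial derivatives of the model output with
    respect to one input feature, encouraging an increasing shape."""
    x = x.clone().requires_grad_(True)
    y = model(x)  # model outputs, shape (batch, 1)
    # Derivative of the outputs w.r.t. the inputs, kept in the graph
    # so the penalty itself can be backpropagated through.
    grads = torch.autograd.grad(y.sum(), x, create_graph=True)[0]
    d = grads[:, feature_idx]          # partial derivative for the chosen feature
    return torch.relu(-d).mean()       # nonzero only where the derivative is negative

def training_step(model, optimizer, x, target, feature_idx, lam):
    """One training step with a derivative penalty added to the task loss.
    In the paper, the weight on the penalty would be adapted by ALM rather
    than held fixed as it is here."""
    optimizer.zero_grad()
    pred = model(x)
    loss = torch.nn.functional.mse_loss(pred, target)
    loss = loss + lam * monotonicity_penalty(model, x, feature_idx)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Other derivative-related shapes follow the same pattern; for example, a convexity constraint would penalize the negative part of the second derivative instead of the first.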
Cite
Text
Gupta et al. "PenDer: Incorporating Shape Constraints via Penalized Derivatives." AAAI Conference on Artificial Intelligence, 2021. doi:10.1609/AAAI.V35I13.17373

Markdown

[Gupta et al. "PenDer: Incorporating Shape Constraints via Penalized Derivatives." AAAI Conference on Artificial Intelligence, 2021.](https://mlanthology.org/aaai/2021/gupta2021aaai-pender/) doi:10.1609/AAAI.V35I13.17373

BibTeX
@inproceedings{gupta2021aaai-pender,
title = {{PenDer: Incorporating Shape Constraints via Penalized Derivatives}},
author = {Gupta, Akhil and Marla, Lavanya and Sun, Ruoyu and Shukla, Naman and Kolbeinsson, Arinbjörn},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2021},
pages = {11536--11544},
doi = {10.1609/AAAI.V35I13.17373},
url = {https://mlanthology.org/aaai/2021/gupta2021aaai-pender/}
}