Optimization with Non-Differentiable Constraints with Applications to Fairness, Recall, Churn, and Other Goals

Abstract

We show that many machine learning goals can be expressed as “rate constraints” on a model's predictions. We study the problem of training non-convex models subject to these rate constraints (or other non-convex or non-differentiable constraints). In the non-convex setting, the standard approach of Lagrange multipliers may fail. Furthermore, if the constraints are non-differentiable, then one cannot optimize the Lagrangian with gradient-based methods. To address these issues, we introduce a new “proxy-Lagrangian” formulation. This formulation leads to an algorithm that, assuming access to an optimization oracle, produces a stochastic classifier by playing a two-player non-zero-sum game solving for what we call a semi-coarse correlated equilibrium, which in turn corresponds to an approximately optimal and feasible solution to the constrained optimization problem. We then give a procedure that shrinks the randomized solution down to a mixture of at most $m+1$ deterministic solutions, given $m$ constraints. This culminates in a procedure, with theoretical guarantees, for solving non-convex constrained optimization problems whose constraints may themselves be non-differentiable and non-convex. We provide extensive experimental results covering a broad range of policy goals, including various fairness metrics, accuracy, coverage, recall, and churn.
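
The two-player structure described above (a model player that descends a differentiable proxy-Lagrangian while a multiplier player ascends using the original, possibly non-differentiable, rate constraints) can be illustrated with a small sketch. The snippet below is a minimal single-constraint illustration in plain NumPy, not the authors' implementation: the linear model, the coverage constraint, the simple projected-ascent multiplier update, the step sizes, and the function name `train_with_rate_constraint` are all assumptions made for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_with_rate_constraint(X, y, coverage_target=0.3,
                               steps=1000, lr_theta=0.1, lr_lam=0.05):
    """Sketch of a proxy-Lagrangian style two-player training loop.

    theta-player: gradient descent on logistic loss plus lam times a
    differentiable proxy for the coverage rate constraint.
    lambda-player: projected gradient ascent driven by the ORIGINAL,
    non-differentiable rate (fraction of positive predictions).
    """
    n, d = X.shape
    theta = np.zeros(d)
    lam = 0.0  # multiplier for the single constraint

    for _ in range(steps):
        scores = X @ theta
        probs = sigmoid(scores)

        # theta-player update: differentiable proxy-Lagrangian.
        grad_loss = X.T @ (probs - y) / n                 # logistic-loss gradient
        grad_proxy = X.T @ (probs * (1.0 - probs)) / n    # grad of mean(sigmoid(score))
        theta -= lr_theta * (grad_loss + lam * grad_proxy)

        # lambda-player update: original rate constraint, coverage <= target.
        coverage = np.mean(scores > 0)                    # non-differentiable rate
        lam = max(0.0, lam + lr_lam * (coverage - coverage_target))

    return theta, lam

if __name__ == "__main__":
    # Example usage on synthetic data (purely illustrative).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 5))
    y = (X[:, 0] + 0.5 * rng.normal(size=500) > 0).astype(float)
    theta, lam = train_with_rate_constraint(X, y, coverage_target=0.3)
    print("final coverage:", np.mean(X @ theta > 0), "lambda:", lam)
```

Note that the two players optimize different objectives (the proxy for theta, the true rate for lambda), which is what makes the game non-zero-sum; the paper's full algorithm additionally returns a stochastic classifier and shrinks it to a mixture of at most $m+1$ deterministic solutions.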

Cite

Text

Cotter et al. "Optimization with Non-Differentiable Constraints with Applications to Fairness, Recall, Churn, and Other Goals." Journal of Machine Learning Research, 2019.

Markdown

[Cotter et al. "Optimization with Non-Differentiable Constraints with Applications to Fairness, Recall, Churn, and Other Goals." Journal of Machine Learning Research, 2019.](https://mlanthology.org/jmlr/2019/cotter2019jmlr-optimization/)

BibTeX

@article{cotter2019jmlr-optimization,
  title     = {{Optimization with Non-Differentiable Constraints with Applications to Fairness, Recall, Churn, and Other Goals}},
  author    = {Cotter, Andrew and Jiang, Heinrich and Gupta, Maya and Wang, Serena and Narayan, Taman and You, Seungil and Sridharan, Karthik},
  journal   = {Journal of Machine Learning Research},
  year      = {2019},
  pages     = {1--59},
  volume    = {20},
  url       = {https://mlanthology.org/jmlr/2019/cotter2019jmlr-optimization/}
}