Consistent Plug-in Classifiers for Complex Objectives and Constraints

Abstract

We present a statistically consistent algorithm for constrained classification problems where the objective (e.g. F-measure, G-mean) and the constraints (e.g. demographic parity, coverage) are defined by general functions of the confusion matrix. The key idea is to reduce the problem to a sequence of plug-in classifier learning problems, which is done by formulating an optimization problem over the intersection of the set of achievable confusion matrices and the set of feasible matrices. For objectives and constraints that are convex functions of the confusion matrix, our algorithm requires $O(1/\epsilon^2)$ calls to the plug-in routine, which improves on the $O(1/\epsilon^3)$ rate achieved by Narasimhan (2018). We demonstrate empirically that our algorithm performs at least as well as state-of-the-art methods for these problems.
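The general reduction idea behind this line of work can be illustrated with a minimal sketch (this is not the paper's algorithm, just the plug-in principle it builds on): a convex function of the confusion matrix is minimized by a Frank-Wolfe-style loop, where each linear minimization over achievable confusion matrices is solved exactly by a cost-sensitive plug-in classifier that thresholds the class-probability function. The synthetic data, the quadratic objective, and all function names below are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic setup where the class-probability function eta(x) = P(Y=1|X=x)
# is known exactly, so the plug-in step incurs no estimation error.
n = 20000
x = rng.uniform(-1, 1, n)
eta = 1.0 / (1.0 + np.exp(-4 * x))
y = (rng.uniform(size=n) < eta).astype(int)

def confusion(pred):
    """Empirical 2x2 confusion matrix, C[i, j] = P(Y = i, h(X) = j)."""
    C = np.zeros((2, 2))
    for i in (0, 1):
        for j in (0, 1):
            C[i, j] = np.mean((y == i) & (pred == j))
    return C

def plug_in(L):
    """Cost-sensitive plug-in classifier minimizing the linear loss <L, C>:
    predict the label with smaller expected cost under eta."""
    cost1 = eta * L[1, 1] + (1 - eta) * L[0, 1]
    cost0 = eta * L[1, 0] + (1 - eta) * L[0, 0]
    return (cost1 < cost0).astype(int)

# An illustrative convex objective on the confusion matrix:
# quadratic penalty on false positives (weighted) and false negatives.
f = lambda C: 2 * C[0, 1] ** 2 + C[1, 0] ** 2
grad = lambda C: np.array([[0.0, 4 * C[0, 1]], [2 * C[1, 0], 0.0]])

# Frank-Wolfe over the convex hull of achievable confusion matrices:
# each linear minimization is one call to the plug-in routine, and the
# running average corresponds to a randomized classifier.
C = confusion((eta > 0.5).astype(int))
for t in range(50):
    h = plug_in(grad(C))              # linear minimization via plug-in
    gamma = 2.0 / (t + 3)             # standard Frank-Wolfe step size
    C = (1 - gamma) * C + gamma * confusion(h)

print(round(f(C), 4))
```

The linear-minimization step is what makes plug-in methods attractive here: minimizing any linear function of the confusion matrix reduces to thresholding `eta`, so the outer loop only ever needs cheap cost-sensitive classification calls.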

Cite

Text

Tavker et al. "Consistent Plug-in Classifiers for Complex Objectives and Constraints." Neural Information Processing Systems, 2020.

Markdown

[Tavker et al. "Consistent Plug-in Classifiers for Complex Objectives and Constraints." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/tavker2020neurips-consistent/)

BibTeX

@inproceedings{tavker2020neurips-consistent,
  title     = {{Consistent Plug-in Classifiers for Complex Objectives and Constraints}},
  author    = {Tavker, Shiv Kumar and Ramaswamy, Harish Guruprasad and Narasimhan, Harikrishna},
  booktitle = {Neural Information Processing Systems},
  year      = {2020},
  url       = {https://mlanthology.org/neurips/2020/tavker2020neurips-consistent/}
}