Cost-Sensitive Self-Training for Optimizing Non-Decomposable Metrics

Abstract

Self-training-based semi-supervised learning algorithms have enabled the learning of highly accurate deep neural networks using only a fraction of the labeled data. However, the majority of work on self-training has focused on the objective of improving accuracy, whereas practical machine learning systems can have complex goals (e.g., maximizing the minimum recall across classes) that are non-decomposable in nature. In this work, we introduce the Cost-Sensitive Self-Training (CSST) framework, which generalizes self-training-based methods for optimizing non-decomposable metrics. We prove that our framework can better optimize the desired non-decomposable metric by utilizing unlabeled data, under data distribution assumptions similar to those made in the analysis of self-training. Using the proposed CSST framework, we obtain practical self-training methods (for both vision and NLP tasks) for optimizing different non-decomposable metrics using deep neural networks. Our results demonstrate that CSST achieves an improvement over the state-of-the-art in the majority of cases across datasets and objectives.
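To make the idea concrete, the sketch below is a minimal, illustrative example (not the paper's algorithm) of what "cost-sensitive" pseudo-labeling can mean in practice: a non-decomposable metric (minimum per-class recall) is measured, and pseudo-labeled examples are reweighted so that poorly recalled classes contribute more to the unlabeled loss. The weighting scheme, confidence threshold, and all function names here are assumptions made for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)


def per_class_recall(y_true, y_pred, num_classes):
    """Recall of each class; the minimum over classes is a non-decomposable metric."""
    recalls = np.zeros(num_classes)
    for c in range(num_classes):
        mask = y_true == c
        recalls[c] = (y_pred[mask] == c).mean() if mask.any() else 0.0
    return recalls


def cost_sensitive_pseudo_label_loss(logits_unlabeled, class_weights, threshold=0.95):
    """Weighted cross-entropy over confident pseudo-labels.

    class_weights up-weights classes the current model recalls poorly;
    this is an illustrative stand-in for a cost/gain matrix, not CSST itself.
    """
    # Softmax over unlabeled logits.
    probs = np.exp(logits_unlabeled - logits_unlabeled.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    pseudo = probs.argmax(axis=1)
    confident = probs.max(axis=1) >= threshold
    if not confident.any():
        return 0.0
    ce = -np.log(probs[confident, pseudo[confident]] + 1e-12)
    return float((class_weights[pseudo[confident]] * ce).mean())


# Toy labeled validation data to estimate per-class recall of a noisy classifier.
num_classes = 3
y_true = rng.integers(0, num_classes, size=1000)
y_pred = np.where(rng.random(1000) < 0.2,
                  rng.integers(0, num_classes, size=1000), y_true)

recalls = per_class_recall(y_true, y_pred, num_classes)
weights = 1.0 / (recalls + 1e-3)   # emphasize classes with low recall
weights /= weights.sum()

# Toy unlabeled logits on which pseudo-labels would be generated.
logits_u = rng.normal(size=(500, num_classes)) * 3

print("per-class recall:", recalls, "min recall:", recalls.min())
print("cost-sensitive pseudo-label loss:", cost_sensitive_pseudo_label_loss(logits_u, weights))
```

In a full self-training loop, such a weighted unlabeled loss would be combined with the supervised loss and the weights refreshed as the metric estimates change; CSST's actual formulation and guarantees are given in the paper.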

Cite

Text

Rangwani et al. "Cost-Sensitive Self-Training for Optimizing Non-Decomposable Metrics." Neural Information Processing Systems, 2022.

Markdown

[Rangwani et al. "Cost-Sensitive Self-Training for Optimizing Non-Decomposable Metrics." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/rangwani2022neurips-costsensitive/)

BibTeX

@inproceedings{rangwani2022neurips-costsensitive,
  title     = {{Cost-Sensitive Self-Training for Optimizing Non-Decomposable Metrics}},
  author    = {Rangwani, Harsh and Ramasubramanian, Shrinivas and Takemori, Sho and Takashi, Kato and Umeda, Yuhei and R, Venkatesh Babu},
  booktitle = {Neural Information Processing Systems},
  year      = {2022},
  url       = {https://mlanthology.org/neurips/2022/rangwani2022neurips-costsensitive/}
}