Rejection via Learning Density Ratios

Abstract

Classification with rejection emerges as a learning paradigm which allows models to abstain from making predictions. The predominant approach is to alter the supervised learning pipeline by augmenting typical loss functions, letting model rejection incur a lower loss than an incorrect prediction. Instead, we propose a different distributional perspective, where we seek to find an idealized data distribution which maximizes a pretrained model's performance. This can be formalized via the optimization of a loss's risk with a $\phi$-divergence regularization term. Through this idealized distribution, a rejection decision can be made by utilizing the density ratio between this distribution and the data distribution. We focus on the setting where the $\phi$-divergences are specified by the family of $\alpha$-divergences. Our framework is tested empirically over clean and noisy datasets.
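
As a rough sketch of the formulation described above (the notation is illustrative rather than taken verbatim from the paper): given a data distribution $P$, a pretrained model $f$, a loss $\ell$, and a regularization weight $\lambda > 0$, the idealized distribution can be written as

$$ Q^\star \in \arg\min_{Q} \; \mathbb{E}_{(x,y) \sim Q}\big[\ell(f(x), y)\big] + \lambda\, D_\phi(Q \,\|\, P), $$

with an input $x$ rejected when the density ratio $\tfrac{\mathrm{d}Q^\star}{\mathrm{d}P}(x)$ falls below a threshold $\tau$; here $\lambda$ and $\tau$ are assumed hyperparameters.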

Cite

Text

Soen et al. "Rejection via Learning Density Ratios." Neural Information Processing Systems, 2024. doi:10.52202/079017-1643

Markdown

[Soen et al. "Rejection via Learning Density Ratios." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/soen2024neurips-rejection/) doi:10.52202/079017-1643

BibTeX

@inproceedings{soen2024neurips-rejection,
  title     = {{Rejection via Learning Density Ratios}},
  author    = {Soen, Alexander and Husain, Hisham and Schulz, Philip and Nguyen, Vu},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-1643},
  url       = {https://mlanthology.org/neurips/2024/soen2024neurips-rejection/}
}