De-Biasing Weakly Supervised Learning by Regularizing Prediction Entropy

Abstract

We explore the effect of regularizing prediction entropy in a weakly supervised setting with inexact class labels. When the underlying data distribution is biased toward a specific subclass, we hypothesize that entropy regularization can be used to bootstrap a training set that mitigates this bias. We conduct experiments across multiple datasets, both under the supervision of an oracle and in a semi-supervised setting, finding substantial reductions in training set bias that can decrease the test error rate. These findings suggest entropy regularization as a promising approach to de-biasing weakly supervised learning systems.
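The core mechanism the abstract describes, adding a prediction-entropy term to the training loss, can be sketched in a few lines. The snippet below is an illustrative sketch, not the authors' implementation: the PyTorch framing, the weight beta, and the sign of the entropy term (here rewarding higher-entropy predictions, which penalizes over-confidence on a dominant subclass) are all assumptions for illustration.

import torch
import torch.nn.functional as F

def entropy_regularized_loss(logits, targets, beta=0.1):
    # Standard cross-entropy on the (possibly inexact) class labels.
    ce = F.cross_entropy(logits, targets)
    # Shannon entropy of the predicted class distribution, averaged
    # over the batch: H(p) = -sum_k p_k log p_k.
    probs = F.softmax(logits, dim=-1)
    log_probs = F.log_softmax(logits, dim=-1)
    entropy = -(probs * log_probs).sum(dim=-1).mean()
    # Subtracting beta * H(p) encourages higher prediction entropy;
    # flipping the sign would instead encourage confident predictions.
    # The direction and weight are assumptions, not taken from the paper.
    return ce - beta * entropy

In a bootstrapping setup like the one the abstract outlines, the regularized model's predictions could then be used to re-select or re-label training examples before the next round of training.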

Cite

Text

Wyatte. "De-Biasing Weakly Supervised Learning by Regularizing Prediction Entropy." ICLR 2019 Workshops: LLD, 2019.

Markdown

[Wyatte. "De-Biasing Weakly Supervised Learning by Regularizing Prediction Entropy." ICLR 2019 Workshops: LLD, 2019.](https://mlanthology.org/iclrw/2019/wyatte2019iclrw-debiasing/)

BibTeX

@inproceedings{wyatte2019iclrw-debiasing,
  title     = {{De-Biasing Weakly Supervised Learning by Regularizing Prediction Entropy}},
  author    = {Wyatte, Dean},
  booktitle = {ICLR 2019 Workshops: LLD},
  year      = {2019},
  url       = {https://mlanthology.org/iclrw/2019/wyatte2019iclrw-debiasing/}
}