Resolving Label Uncertainty with Implicit Posterior Models
Abstract
We propose a method for jointly inferring labels across a collection of data samples, where each sample consists of an observation and a prior belief about the label. By implicitly assuming the existence of a generative model for which a differentiable predictor is the posterior, we derive a training objective that allows learning under weak beliefs. This formulation unifies various machine learning settings; the weak beliefs can come in the form of noisy or incomplete labels, likelihoods given by a different prediction mechanism on auxiliary input, or common-sense priors reflecting knowledge about the structure of the problem at hand. We demonstrate the proposed algorithms on diverse problems: classification with negative training examples, learning from rankings, weakly and self-supervised aerial imagery segmentation, co-segmentation of video frames, and coarsely supervised text classification.
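The abstract describes training a differentiable predictor, treated as a posterior over labels, against per-sample prior beliefs. As a rough illustration only (not the paper's actual objective or code), the sketch below fits a linear softmax classifier to weak beliefs by minimizing the cross-entropy between the predicted posterior and each sample's prior distribution; all names and the toy data are assumptions made for this example.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 200, 5, 3  # samples, features, labels (toy sizes)

# Toy observations: each sample is drawn around one of k class means.
means = 3.0 * rng.normal(size=(k, d))
true_y = rng.integers(0, k, size=n)
X = means[true_y] + rng.normal(size=(n, d))

# Weak beliefs: a distribution over labels per sample, concentrated on
# (but not certain about) the true label -- standing in for noisy or
# incomplete supervision.
priors = np.full((n, k), 0.1 / (k - 1))
priors[np.arange(n), true_y] = 0.9

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Gradient descent on the soft-target cross-entropy between the
# predicted posterior q(y|x) and the prior beliefs.
W = np.zeros((d, k))
for _ in range(300):
    q = softmax(X @ W)
    W -= 0.5 * (X.T @ (q - priors)) / n

accuracy = (softmax(X @ W).argmax(axis=1) == true_y).mean()
```

This recovers the familiar soft-label training setup; the paper's contribution lies in deriving the objective from an implicit generative model so that richer forms of belief (rankings, negative examples, structural priors) fit the same framework.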
Cite
Text
Rolf et al. "Resolving Label Uncertainty with Implicit Posterior Models." Uncertainty in Artificial Intelligence, 2022.
Markdown
[Rolf et al. "Resolving Label Uncertainty with Implicit Posterior Models." Uncertainty in Artificial Intelligence, 2022.](https://mlanthology.org/uai/2022/rolf2022uai-resolving/)
BibTeX
@inproceedings{rolf2022uai-resolving,
title = {{Resolving Label Uncertainty with Implicit Posterior Models}},
author = {Rolf, Esther and Malkin, Nikolay and Graikos, Alexandros and Jojic, Ana and Robinson, Caleb and Jojic, Nebojsa},
booktitle = {Uncertainty in Artificial Intelligence},
year = {2022},
pages = {1707--1717},
volume = {180},
url = {https://mlanthology.org/uai/2022/rolf2022uai-resolving/}
}