Bayesian Metric Learning for Uncertainty Quantification in Image Retrieval

Abstract

We propose a Bayesian encoder for metric learning. Rather than relying on neural amortization as done in prior work, we learn a distribution over the network weights with the Laplace approximation. We first prove that the contrastive loss is a negative log-likelihood on the spherical space. We then propose three methods that ensure a positive definite covariance matrix. Lastly, we present a novel decomposition of the generalized Gauss-Newton approximation. Empirically, we show that our Laplacian Metric Learner (LAM) yields well-calibrated uncertainties, reliably detects out-of-distribution examples, and achieves state-of-the-art predictive performance.
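To illustrate the general idea of a Laplace approximation over encoder weights, the sketch below fits a diagonal Laplace posterior around fixed "MAP" weights of a toy linear encoder and propagates weight samples to embedding uncertainty. This is not the paper's implementation: the squared-error loss stands in for the contrastive loss, the empirical Fisher stands in for the generalized Gauss-Newton matrix, and all names and shapes are illustrative. Note how adding the prior precision keeps the posterior precision strictly positive, one simple way to guarantee a positive definite (diagonal) covariance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a linear "encoder" mapping inputs to L2-normalized embeddings.
d_in, d_emb, n = 5, 3, 40
W = rng.normal(size=(d_emb, d_in)) * 0.1        # pretend these are trained MAP weights
X = rng.normal(size=(n, d_in))
T = rng.normal(size=(n, d_emb))
T /= np.linalg.norm(T, axis=1, keepdims=True)   # illustrative targets on the unit sphere

def embed(W, x):
    z = W @ x
    return z / np.linalg.norm(z)                # project the embedding onto the sphere

# Diagonal Laplace approximation around the MAP weights: approximate the
# loss Hessian by the empirical Fisher (sum of squared per-example gradients).
fisher_diag = np.zeros(W.size)
for x, t in zip(X, T):
    g = np.outer(W @ x - t, x).ravel()          # gradient of the surrogate 0.5*||Wx - t||^2
    fisher_diag += g * g

# Posterior precision = Fisher + prior precision. The prior term makes every
# diagonal entry strictly positive, so the covariance is positive definite.
prior_precision = 1.0
post_var = 1.0 / (fisher_diag + prior_precision)

# Sample weights from the Laplace posterior and measure embedding spread.
def embedding_uncertainty(x, n_samples=200):
    zs = []
    for _ in range(n_samples):
        Ws = W + (rng.normal(size=W.size) * np.sqrt(post_var)).reshape(W.shape)
        zs.append(embed(Ws, x))
    return np.mean(np.var(np.stack(zs), axis=0))  # average per-dimension variance

u = embedding_uncertainty(X[0])
```

Inputs whose embeddings are sensitive to weight perturbations (e.g. out-of-distribution examples) would show a larger spread `u` than well-supported in-distribution inputs.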

Cite

Text

Warburg et al. "Bayesian Metric Learning for Uncertainty Quantification in Image Retrieval." Neural Information Processing Systems, 2023.

Markdown

[Warburg et al. "Bayesian Metric Learning for Uncertainty Quantification in Image Retrieval." Neural Information Processing Systems, 2023.](https://mlanthology.org/neurips/2023/warburg2023neurips-bayesian/)

BibTeX

@inproceedings{warburg2023neurips-bayesian,
  title     = {{Bayesian Metric Learning for Uncertainty Quantification in Image Retrieval}},
  author    = {Warburg, Frederik and Miani, Marco and Brack, Silas and Hauberg, Søren},
  booktitle = {Neural Information Processing Systems},
  year      = {2023},
  url       = {https://mlanthology.org/neurips/2023/warburg2023neurips-bayesian/}
}