Inverse Problems Leveraging Pre-Trained Contrastive Representations
Abstract
We study a new family of inverse problems for recovering representations of corrupted data. We assume access to a pre-trained representation learning network R(x) that operates on clean images, like CLIP. The problem is to recover the representation R(x) of an image given only a corrupted version A(x), for some known forward operator A. We propose a supervised inversion method that uses a contrastive objective to obtain excellent representations for highly corrupted images. Using a linear probe on our robust representations, we achieve higher accuracy than end-to-end supervised baselines when classifying images with various types of distortions, including blurring, additive noise, and random pixel masking. We evaluate on a subset of ImageNet and observe that our method is robust to varying levels of distortion. Our method outperforms end-to-end baselines across a wide range of forward operators, even when trained with a fraction of the labeled data.
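For concreteness, below is a minimal PyTorch sketch of the training setup the abstract describes: a frozen pre-trained encoder R (e.g. CLIP's image tower) produces target representations of clean images, a known forward operator A corrupts them, and an inversion network f is trained with an InfoNCE-style contrastive objective to recover R(x) from A(x). The names R, A, f, and infonce_loss are illustrative assumptions for this sketch, not the authors' actual code, and the paper's exact contrastive loss may differ in detail.

import torch
import torch.nn.functional as F

def infonce_loss(z_pred, z_clean, temperature=0.1):
    # Contrastive objective (assumed InfoNCE form): each recovered
    # representation z_pred[i] should match its clean counterpart
    # z_clean[i] against all other entries in the batch.
    z_pred = F.normalize(z_pred, dim=-1)
    z_clean = F.normalize(z_clean, dim=-1)
    logits = z_pred @ z_clean.t() / temperature  # (B, B) similarity matrix
    labels = torch.arange(z_pred.size(0), device=z_pred.device)
    return F.cross_entropy(logits, labels)

def train_step(f, R, A, x, optimizer):
    # One supervised inversion step: corrupt a clean batch x with the
    # known forward operator A, then train f to recover R(x) from A(x).
    with torch.no_grad():
        z_clean = R(x)      # target representation from the clean image
    z_pred = f(A(x))        # the inversion network sees only the corruption
    loss = infonce_loss(z_pred, z_clean)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

After training, f is frozen and a linear probe (a single linear classifier) is fit on the recovered representations f(A(x)), matching the evaluation protocol the abstract describes.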
Cite
Text
Ravula et al. "Inverse Problems Leveraging Pre-Trained Contrastive Representations." Neural Information Processing Systems, 2021.

Markdown
[Ravula et al. "Inverse Problems Leveraging Pre-Trained Contrastive Representations." Neural Information Processing Systems, 2021.](https://mlanthology.org/neurips/2021/ravula2021neurips-inverse/)

BibTeX
@inproceedings{ravula2021neurips-inverse,
title = {{Inverse Problems Leveraging Pre-Trained Contrastive Representations}},
author = {Ravula, Sriram and Smyrnis, Georgios and Jordan, Matt and Dimakis, Alexandros G},
booktitle = {Neural Information Processing Systems},
year = {2021},
url = {https://mlanthology.org/neurips/2021/ravula2021neurips-inverse/}
}