Multispectral Contrastive Learning with Viewmaker Networks

Abstract

Contrastive learning methods have been applied to a range of domains and modalities by training models to identify similar "views" of data points. However, specialized scientific modalities pose a challenge for this paradigm, as identifying good views for each scientific instrument is complex and time-intensive. In this paper, we focus on applying contrastive learning approaches to a variety of remote sensing datasets. We show that Viewmaker networks, a recently proposed method for generating views without extensive domain knowledge, can produce useful views in this setting. We also present a Viewmaker variant called Divmaker, which achieves similar performance and does not require adversarial optimization. Applying both methods to four multispectral imaging problems, each with a different format, we find that Viewmaker and Divmaker can outperform cropping- and reflection-based methods for contrastive learning in every case when evaluated on downstream classification tasks. This provides additional evidence that domain-agnostic methods can empower contrastive learning to scale to real-world scientific domains. Open source code can be found at https://github.com/jbayrooti/divmaker.
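The abstract describes the approach only at a high level. As a rough illustration of how a learned view generator can plug into a standard contrastive objective, the sketch below pairs a SimCLR-style NT-Xent loss with an adversarially trained view network. The module names (`encoder`, `viewmaker`), the choice of loss, and the gradient-reversal step are assumptions made for illustration rather than the authors' exact implementation; per the abstract, a Divmaker-style variant would drop the adversarial step. The linked repository contains the actual code.

```python
# Illustrative sketch only (not the paper's exact code): a SimCLR-style
# contrastive loss where the two "views" come from a learnable view network.
import torch
import torch.nn.functional as F


def nt_xent_loss(z1, z2, temperature=0.07):
    """SimCLR-style NT-Xent loss between two batches of embeddings."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                # (2N, D)
    sim = z @ z.t() / temperature                 # pairwise cosine similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))    # exclude self-similarity
    # The positive for sample i is its other view: i+N for the first half, i-N for the second.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)


def train_step(encoder, viewmaker, enc_opt, vm_opt, x):
    """One adversarial step: the encoder minimizes the loss, the view network maximizes it."""
    v1, v2 = viewmaker(x), viewmaker(x)           # two stochastic views of the batch
    loss = nt_xent_loss(encoder(v1), encoder(v2))

    enc_opt.zero_grad()
    vm_opt.zero_grad()
    loss.backward()
    # Flip the view network's gradients so it ascends the contrastive loss
    # (the adversarial objective); a non-adversarial variant would skip this.
    for p in viewmaker.parameters():
        if p.grad is not None:
            p.grad.neg_()
    enc_opt.step()
    vm_opt.step()
    return loss.item()
```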

Cite

Text

Bayrooti et al. "Multispectral Contrastive Learning with Viewmaker Networks." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2023. doi:10.1109/CVPRW59228.2023.00050

Markdown

[Bayrooti et al. "Multispectral Contrastive Learning with Viewmaker Networks." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2023.](https://mlanthology.org/cvprw/2023/bayrooti2023cvprw-multispectral/) doi:10.1109/CVPRW59228.2023.00050

BibTeX

@inproceedings{bayrooti2023cvprw-multispectral,
  title     = {{Multispectral Contrastive Learning with Viewmaker Networks}},
  author    = {Bayrooti, Jasmine and Goodman, Noah D. and Tamkin, Alex},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2023},
  pages     = {440--448},
  doi       = {10.1109/CVPRW59228.2023.00050},
  url       = {https://mlanthology.org/cvprw/2023/bayrooti2023cvprw-multispectral/}
}