Augment with Care: Contrastive Learning for Combinatorial Problems
Abstract
Supervised learning can improve the design of state-of-the-art solvers for combinatorial problems, but labelling large numbers of combinatorial instances is often impractical due to exponential worst-case complexity. Inspired by the recent success of contrastive pre-training for images, we conduct a scientific study of the effect of augmentation design on contrastive pre-training for the Boolean satisfiability problem. While typical graph contrastive pre-training uses label-agnostic augmentations, our key insight is that many combinatorial problems have well-studied invariances, which allow for the design of label-preserving augmentations. We find that label-preserving augmentations are critical for the success of contrastive pre-training. We show that our representations achieve test accuracy comparable to fully-supervised learning while using only 1% of the labels. We also demonstrate that our representations are more transferable to larger problems from unseen domains. Our code is available at https://github.com/h4duan/contrastive-sat.
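To make the key idea concrete, below is a minimal sketch (not the authors' released code; see the repository linked above for that) of label-preserving augmentations for SAT formulas in DIMACS-style clause lists. Renaming variables, flipping the polarity of every occurrence of a variable, and shuffling clause order all preserve satisfiability, so two augmented views of a formula share its SAT/UNSAT label and can serve as a contrastive positive pair. The `augment` function and its parameters are illustrative assumptions; the exact augmentation set studied in the paper may differ.

```python
import random

def augment(clauses, num_vars, flip_prob=0.5, seed=None):
    """Return a satisfiability-equivalent copy of a CNF formula.

    `clauses` is a list of clauses; each clause is a list of nonzero ints,
    where literal v > 0 means variable v and -v means its negation.
    """
    rng = random.Random(seed)

    # Random variable renaming: a permutation of {1, ..., num_vars}.
    perm = list(range(1, num_vars + 1))
    rng.shuffle(perm)
    rename = {v: perm[v - 1] for v in range(1, num_vars + 1)}

    # Random polarity flips: negating every occurrence of a variable
    # is the substitution v -> not v, which preserves satisfiability.
    flip = {v: rng.random() < flip_prob for v in range(1, num_vars + 1)}

    def map_literal(lit):
        v = abs(lit)
        new = rename[v] if lit > 0 else -rename[v]
        return -new if flip[v] else new

    new_clauses = [[map_literal(l) for l in clause] for clause in clauses]
    rng.shuffle(new_clauses)  # clause order is also irrelevant to the label
    return new_clauses

# Two augmented "views" of the same formula form a contrastive positive pair.
formula = [[1, -2], [2, 3], [-1, -3]]
view_a = augment(formula, num_vars=3, seed=0)
view_b = augment(formula, num_vars=3, seed=1)
```

In a SimCLR-style contrastive setup, `view_a` and `view_b` would each be encoded (e.g., by a graph neural network over the clause-variable graph) and their embeddings pulled together under a contrastive loss, while embeddings of views from different formulas are pushed apart.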
Cite
Text
Duan et al. "Augment with Care: Contrastive Learning for Combinatorial Problems." International Conference on Machine Learning, 2022.
Markdown
[Duan et al. "Augment with Care: Contrastive Learning for Combinatorial Problems." International Conference on Machine Learning, 2022.](https://mlanthology.org/icml/2022/duan2022icml-augment/)
BibTeX
@inproceedings{duan2022icml-augment,
  title = {{Augment with Care: Contrastive Learning for Combinatorial Problems}},
  author = {Duan, Haonan and Vaezipoor, Pashootan and Paulus, Max B and Ruan, Yangjun and Maddison, Chris},
  booktitle = {International Conference on Machine Learning},
  year = {2022},
  pages = {5627--5642},
  volume = {162},
  url = {https://mlanthology.org/icml/2022/duan2022icml-augment/}
}