Multi-Task Self-Supervised Visual Learning

Abstract

We investigate methods for combining multiple self-supervised tasks (i.e., supervised tasks where data can be collected without manual labeling) in order to train a single visual representation. First, we provide an apples-to-apples comparison of four different self-supervised tasks using the very deep ResNet-101 architecture. We then combine tasks to jointly train a network. We also explore lasso regularization to encourage the network to factorize the information in its representation, and methods for "harmonizing" network inputs in order to learn a more unified representation. We evaluate all methods on ImageNet classification, PASCAL VOC detection, and NYU depth prediction. Our results show that deeper networks work better, and that combining tasks, even via a naive multi-head architecture, always improves performance. Our best joint network nearly matches the PASCAL performance of a model pre-trained on ImageNet classification, and matches the ImageNet network on NYU depth prediction.
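
The setup the abstract describes, a shared trunk feeding one head per self-supervised task, with a lasso (L1) penalty that encourages each task to draw on only a sparse subset of the shared representation, can be sketched in a few lines. The sketch below illustrates that general architecture; it is not the authors' implementation, and the block count, feature sizes, task output dimensions, and dummy loss are all assumptions made for the example.

```python
# Minimal sketch (not the paper's code): a shared trunk with one linear head
# per self-supervised task, plus a lasso (L1) penalty on per-task weights
# that select which trunk blocks feed each head. All sizes are illustrative.
import torch
import torch.nn as nn


class MultiTaskNet(nn.Module):
    def __init__(self, num_blocks=4, feat_dim=256, task_out_dims=(8, 9, 100)):
        super().__init__()
        # Shared trunk: a stack of small conv blocks standing in for the
        # much deeper ResNet-101 used in the paper.
        self.blocks = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(3 if i == 0 else feat_dim, feat_dim, 3, padding=1),
                nn.ReLU(),
            )
            for i in range(num_blocks)
        ])
        # One head per task (task output sizes here are arbitrary).
        self.heads = nn.ModuleList([nn.Linear(feat_dim, d) for d in task_out_dims])
        # One weight per (task, block) pair; the L1 penalty below pushes
        # each task to use only a few blocks, factorizing the representation.
        self.alpha = nn.Parameter(torch.ones(len(task_out_dims), num_blocks))

    def forward(self, x):
        feats = []
        for block in self.blocks:
            x = block(x)
            feats.append(x.mean(dim=(2, 3)))        # global average pool
        feats = torch.stack(feats, dim=0)           # (blocks, batch, feat)
        outputs = []
        for t, head in enumerate(self.heads):
            # Task-specific linear combination of per-block features.
            combined = (self.alpha[t].view(-1, 1, 1) * feats).sum(dim=0)
            outputs.append(head(combined))
        return outputs

    def lasso_penalty(self):
        return self.alpha.abs().sum()               # L1 encourages sparsity


net = MultiTaskNet()
outs = net(torch.randn(2, 3, 32, 32))
# Dummy per-task losses; real self-supervised losses would go here.
loss = sum(o.pow(2).mean() for o in outs) + 1e-3 * net.lasso_penalty()
loss.backward()
```

The `alpha` matrix stands in for the per-task combination weights: summing each task's output loss together trains the shared trunk jointly (the naive multi-head baseline), and the L1 term on `alpha` is a stand-in for the lasso regularization the abstract mentions, encouraging each head to rely on a sparse subset of trunk features.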

Cite

Text

Doersch and Zisserman. "Multi-Task Self-Supervised Visual Learning." International Conference on Computer Vision, 2017. doi:10.1109/ICCV.2017.226

Markdown

[Doersch and Zisserman. "Multi-Task Self-Supervised Visual Learning." International Conference on Computer Vision, 2017.](https://mlanthology.org/iccv/2017/doersch2017iccv-multitask/) doi:10.1109/ICCV.2017.226

BibTeX

@inproceedings{doersch2017iccv-multitask,
  title     = {{Multi-Task Self-Supervised Visual Learning}},
  author    = {Doersch, Carl and Zisserman, Andrew},
  booktitle = {International Conference on Computer Vision},
  year      = {2017},
  doi       = {10.1109/ICCV.2017.226},
  url       = {https://mlanthology.org/iccv/2017/doersch2017iccv-multitask/}
}