The Multiverse Loss for Robust Transfer Learning
Abstract
Deep learning techniques are renowned for supporting effective transfer learning. However, as we demonstrate, the transferred representations support only a few modes of separation, and much of their dimensionality goes unutilized. In this work, we propose learning multiple orthogonal classifiers in the source domain. We prove that this leads to a reduced-rank representation that nevertheless supports more discriminative directions. Interestingly, the softmax probabilities produced by the multiple classifiers are likely to be identical. Extensive experimental results further demonstrate the effectiveness of our method.
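The core idea, training several classifier heads on a shared representation while encouraging their weight matrices to be mutually orthogonal, can be illustrated with a minimal sketch. This is not the paper's implementation; the dimensions, the form of the orthogonality penalty (sum of squared cross-head inner products), and the penalty weight `lam` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (illustrative, not from the paper):
# d = feature dim, c = classes, k = classifier heads, n = batch size.
d, c, k, n = 16, 5, 3, 32

X = rng.standard_normal((n, d))           # shared representation
y = rng.integers(0, c, size=n)            # class labels
W = rng.standard_normal((k, d, c)) * 0.1  # k independent classifier heads

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # subtract row max for stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def multiverse_loss(W, X, y, lam=1.0):
    """Sum of per-head cross-entropy losses, plus a penalty that pushes
    the heads' weight matrices toward mutual orthogonality (assumed form)."""
    ce = 0.0
    for i in range(k):
        p = softmax(X @ W[i])
        ce += -np.log(p[np.arange(n), y] + 1e-12).mean()
    ortho = 0.0
    for i in range(k):
        for j in range(i + 1, k):
            # Squared inner products between the two heads' weight columns;
            # zero iff the heads are exactly orthogonal.
            ortho += np.sum((W[i].T @ W[j]) ** 2)
    return ce + lam * ortho

loss = multiverse_loss(W, X, y)
```

Minimizing such a loss drives the heads to separate the classes along distinct directions of the representation, which is the mechanism the abstract credits for the additional discriminative directions.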
Cite
Text
Littwin and Wolf. "The Multiverse Loss for Robust Transfer Learning." Conference on Computer Vision and Pattern Recognition, 2016. doi:10.1109/CVPR.2016.429
Markdown
[Littwin and Wolf. "The Multiverse Loss for Robust Transfer Learning." Conference on Computer Vision and Pattern Recognition, 2016.](https://mlanthology.org/cvpr/2016/littwin2016cvpr-multiverse/) doi:10.1109/CVPR.2016.429
BibTeX
@inproceedings{littwin2016cvpr-multiverse,
title = {{The Multiverse Loss for Robust Transfer Learning}},
author = {Littwin, Etai and Wolf, Lior},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2016},
doi = {10.1109/CVPR.2016.429},
url = {https://mlanthology.org/cvpr/2016/littwin2016cvpr-multiverse/}
}