Split-Brain Autoencoders: Unsupervised Learning by Cross-Channel Prediction
Abstract
We propose split-brain autoencoders, a straightforward modification of the traditional autoencoder architecture, for unsupervised representation learning. The method adds a split to the network, resulting in two disjoint sub-networks. Each sub-network is trained to perform a difficult task -- predicting one subset of the data channels from another. Together, the sub-networks extract features from the entire input signal. By forcing the network to solve cross-channel prediction tasks, we induce a representation within the network which transfers well to other, unseen tasks. This method achieves state-of-the-art performance on several large-scale transfer learning benchmarks.
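To make the architecture concrete, here is a minimal sketch of the split-brain idea in PyTorch, assuming Lab-encoded inputs (one lightness channel L and two color channels ab, as used in the paper). The module names (`SubNet`, `SplitBrainAE`), layer sizes, and plain regression losses are illustrative assumptions, not the authors' exact networks; the paper's preferred variant uses classification losses over quantized channel values.

```python
# Illustrative sketch of a split-brain autoencoder; names and sizes are
# hypothetical, not the paper's exact architecture.
import torch
import torch.nn as nn

class SubNet(nn.Module):
    """One half of the split: predicts one channel subset from the other."""
    def __init__(self, in_ch, out_ch, width=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(width, out_ch, 1)  # per-pixel prediction

    def forward(self, x):
        f = self.features(x)
        return self.head(f), f

class SplitBrainAE(nn.Module):
    """Two disjoint sub-networks trained on opposite cross-channel tasks."""
    def __init__(self):
        super().__init__()
        self.l_to_ab = SubNet(in_ch=1, out_ch=2)  # predict ab from L
        self.ab_to_l = SubNet(in_ch=2, out_ch=1)  # predict L from ab

    def forward(self, lab):
        L, ab = lab[:, :1], lab[:, 1:]
        ab_pred, f1 = self.l_to_ab(L)
        l_pred, f2 = self.ab_to_l(ab)
        # For transfer, the representation is the two sub-networks'
        # feature maps concatenated, covering the entire input signal.
        return ab_pred, l_pred, torch.cat([f1, f2], dim=1)

# Toy training step with regression losses (an assumption for brevity;
# the paper also studies classification over quantized channel values).
model = SplitBrainAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
lab = torch.randn(4, 3, 32, 32)  # stand-in batch of Lab images
ab_pred, l_pred, feats = model(lab)
loss = (nn.functional.mse_loss(ab_pred, lab[:, 1:]) +
        nn.functional.mse_loss(l_pred, lab[:, :1]))
opt.zero_grad()
loss.backward()
opt.step()
```

The key design point the sketch reflects is that each sub-network only ever sees its own channel subset, so neither can learn a trivial identity mapping; the transferable representation is the concatenation of both feature extractors.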
Cite
Text
Zhang et al. "Split-Brain Autoencoders: Unsupervised Learning by Cross-Channel Prediction." Conference on Computer Vision and Pattern Recognition, 2017. doi:10.1109/CVPR.2017.76

Markdown

[Zhang et al. "Split-Brain Autoencoders: Unsupervised Learning by Cross-Channel Prediction." Conference on Computer Vision and Pattern Recognition, 2017.](https://mlanthology.org/cvpr/2017/zhang2017cvpr-splitbrain/) doi:10.1109/CVPR.2017.76

BibTeX
@inproceedings{zhang2017cvpr-splitbrain,
title = {{Split-Brain Autoencoders: Unsupervised Learning by Cross-Channel Prediction}},
author = {Zhang, Richard and Isola, Phillip and Efros, Alexei A.},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2017},
doi = {10.1109/CVPR.2017.76},
url = {https://mlanthology.org/cvpr/2017/zhang2017cvpr-splitbrain/}
}