Unsupervised Representation Learning by Sorting Sequences
Abstract
We present an unsupervised representation learning approach using videos without semantic labels. We leverage temporal coherence as a supervisory signal by formulating representation learning as a sequence sorting task. We take temporally shuffled frames (i.e., frames in non-chronological order) as inputs and train a convolutional neural network to sort the shuffled sequences. Similar to comparison-based sorting algorithms, we propose to extract features from all frame pairs and aggregate them to predict the correct order. As sorting a shuffled image sequence requires an understanding of the statistical temporal structure of images, training with such a proxy task allows us to learn rich and generalizable visual representations. We validate the effectiveness of the learned representations by using our method as pre-training for high-level recognition problems. The experimental results show that our method compares favorably against state-of-the-art methods on action recognition, image classification, and object detection tasks.
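The proxy task described above turns unlabeled video into supervised training pairs: sample a few frames, shuffle them, and ask the network to predict which ordering was applied. Because a sequence and its temporal reverse are visually indistinguishable as "correct" orderings, each permutation and its reverse can share one class, halving the label space. The sketch below illustrates this data-generation step only (the function name and the choice of n = 4 frames are illustrative assumptions, not the paper's exact pipeline):

```python
import itertools
import random

def make_sorting_sample(frames, rng=random):
    """Build one training example for the sequence-sorting proxy task.

    frames: a list of n frames in chronological order.
    Returns (shuffled_frames, label), where label indexes one of the
    n!/2 canonical orderings (a permutation and its reverse are merged
    into a single class, since both are temporally plausible).
    """
    n = len(frames)
    # Keep one representative of each {p, reversed(p)} pair.
    perms = [p for p in itertools.permutations(range(n))
             if p < tuple(reversed(p))]
    label = rng.randrange(len(perms))
    perm = perms[label]
    shuffled = [frames[i] for i in perm]
    return shuffled, label
```

For n = 4 frames this yields 4!/2 = 12 order classes; a network trained to predict `label` from `shuffled` must implicitly model the temporal structure of the video.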
Cite
Text
Lee et al. "Unsupervised Representation Learning by Sorting Sequences." International Conference on Computer Vision, 2017. doi:10.1109/ICCV.2017.79

Markdown

[Lee et al. "Unsupervised Representation Learning by Sorting Sequences." International Conference on Computer Vision, 2017.](https://mlanthology.org/iccv/2017/lee2017iccv-unsupervised/) doi:10.1109/ICCV.2017.79

BibTeX
@inproceedings{lee2017iccv-unsupervised,
title = {{Unsupervised Representation Learning by Sorting Sequences}},
author = {Lee, Hsin-Ying and Huang, Jia-Bin and Singh, Maneesh and Yang, Ming-Hsuan},
booktitle = {International Conference on Computer Vision},
year = {2017},
doi = {10.1109/ICCV.2017.79},
url = {https://mlanthology.org/iccv/2017/lee2017iccv-unsupervised/}
}