Back to Basics: Unsupervised Learning of Optical Flow via Brightness Constancy and Motion Smoothness

Abstract

Recently, convolutional networks (convnets) have proven useful for predicting optical flow. Much of this success is predicated on the availability of large datasets that require expensive and involved data acquisition and laborious labeling. To bypass these challenges, we propose an unsupervised approach (i.e., without leveraging ground truth flow) to train a convnet end-to-end for predicting optical flow between two images. We use a loss function that combines a data term that measures photometric constancy over time with a spatial term that models the expected variation of flow across the image. Together these losses form a proxy measure for losses based on the ground truth flow. Empirically, we show that a strong convnet baseline trained with the proposed unsupervised approach outperforms the same network trained with supervision on the KITTI dataset.
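
As a concrete illustration of the objective described above, the sketch below combines a photometric (brightness constancy) term, computed against a backward-warped second frame, with a first-order smoothness term on the predicted flow field. This is a minimal NumPy/SciPy sketch, not the authors' implementation; the generalized Charbonnier penalty, the smoothness weight lam, and all function names are assumptions made for illustration only.

import numpy as np
from scipy.ndimage import map_coordinates

def charbonnier(x, eps=1e-3, alpha=0.45):
    # Robust generalized Charbonnier penalty, (x^2 + eps^2)^alpha (assumed here).
    return (x ** 2 + eps ** 2) ** alpha

def warp(frame2, flow):
    # Backward-warp frame 2 toward frame 1 with bilinear sampling (grayscale H x W).
    h, w = frame2.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    coords = np.stack([ys + flow[..., 1], xs + flow[..., 0]])
    return map_coordinates(frame2, coords, order=1, mode='nearest')

def data_term(frame1, frame2, flow):
    # Photometric (brightness constancy) loss between frame 1 and the warped frame 2.
    return charbonnier(frame1 - warp(frame2, flow)).mean()

def smoothness_term(flow):
    # Penalize differences between horizontally and vertically neighboring flow vectors.
    dx = np.diff(flow, axis=1)  # shape (H, W-1, 2)
    dy = np.diff(flow, axis=0)  # shape (H-1, W, 2)
    return charbonnier(dx).mean() + charbonnier(dy).mean()

def unsupervised_flow_loss(frame1, frame2, flow, lam=1.0):
    # Proxy objective: photometric constancy plus lam-weighted motion smoothness.
    return data_term(frame1, frame2, flow) + lam * smoothness_term(flow)

In the paper, a loss of this kind replaces the supervised endpoint error as the training signal for the convnet, so no ground truth flow is needed; the weighting between the two terms shown here is a placeholder, not a value taken from the paper.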

Cite

Text

Yu et al. "Back to Basics: Unsupervised Learning of Optical Flow via Brightness Constancy and Motion Smoothness." European Conference on Computer Vision, 2016. doi:10.1007/978-3-319-49409-8_1

Markdown

[Yu et al. "Back to Basics: Unsupervised Learning of Optical Flow via Brightness Constancy and Motion Smoothness." European Conference on Computer Vision, 2016.](https://mlanthology.org/eccv/2016/yu2016eccv-back/) doi:10.1007/978-3-319-49409-8_1

BibTeX

@inproceedings{yu2016eccv-back,
  title     = {{Back to Basics: Unsupervised Learning of Optical Flow via Brightness Constancy and Motion Smoothness}},
  author    = {Yu, Jason J. and Harley, Adam W. and Derpanis, Konstantinos G.},
  booktitle = {European Conference on Computer Vision},
  year      = {2016},
  pages     = {3--10},
  doi       = {10.1007/978-3-319-49409-8_1},
  url       = {https://mlanthology.org/eccv/2016/yu2016eccv-back/}
}