Optical Flow with Semantic Segmentation and Localized Layers

Abstract

Existing optical flow methods make generic, spatially homogeneous assumptions about the spatial structure of the flow. In reality, optical flow varies across an image depending on object class. Simply put, different objects move differently. Here we exploit recent advances in static semantic scene segmentation to segment the image into objects of different types. We define different models of image motion in these regions depending on the type of object: for example, we model the motion of roads with homographies, vegetation with spatially smooth flow, and independently moving objects such as cars and planes with affine motion plus deviations. We then pose the flow estimation problem using a novel formulation of localized layers, which addresses limitations of traditional layered models for dealing with complex scene motion. Our semantic flow method achieves the lowest error of any published method on the KITTI-2015 flow benchmark and produces qualitatively better flow and segmentation than recent top methods on a wide range of natural videos.
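The per-class motion models mentioned in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; the class names, OpenCV calls, and RANSAC threshold are illustrative assumptions showing how one might fit a homography to road correspondences, an affine model to a rigidly moving object, and fall back to generic smooth flow elsewhere.

```python
# Minimal sketch (not the paper's code): choose a parametric motion model per
# semantic class, as the abstract describes. Class labels and thresholds are
# illustrative assumptions.
import numpy as np
import cv2

def fit_region_motion(pts1, pts2, semantic_class):
    """Fit a motion model to point correspondences (pts1 -> pts2) of one region."""
    if semantic_class == "road":
        # Planar surface: model its motion with a homography.
        H, _ = cv2.findHomography(pts1, pts2, cv2.RANSAC, 3.0)
        return ("homography", H)
    elif semantic_class in ("car", "plane"):
        # Independently moving, roughly rigid object: affine motion (deviations
        # from this base model would be estimated on top of it).
        A, _ = cv2.estimateAffine2D(pts1, pts2, method=cv2.RANSAC)
        return ("affine", A)
    else:
        # E.g. vegetation: no parametric model, defer to spatially smooth flow.
        return ("smooth", None)

# Toy correspondences: random points shifted by a constant translation.
pts1 = (np.random.rand(50, 1, 2) * 100).astype(np.float32)
pts2 = pts1 + np.float32([2.0, 1.0])
print(fit_region_motion(pts1, pts2, "road"))
print(fit_region_motion(pts1, pts2, "car"))
```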

Cite

Text

Sevilla-Lara et al. "Optical Flow with Semantic Segmentation and Localized Layers." Conference on Computer Vision and Pattern Recognition, 2016. doi:10.1109/CVPR.2016.422

Markdown

[Sevilla-Lara et al. "Optical Flow with Semantic Segmentation and Localized Layers." Conference on Computer Vision and Pattern Recognition, 2016.](https://mlanthology.org/cvpr/2016/sevillalara2016cvpr-optical/) doi:10.1109/CVPR.2016.422

BibTeX

@inproceedings{sevillalara2016cvpr-optical,
  title     = {{Optical Flow with Semantic Segmentation and Localized Layers}},
  author    = {Sevilla-Lara, Laura and Sun, Deqing and Jampani, Varun and Black, Michael J.},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2016},
  doi       = {10.1109/CVPR.2016.422},
  url       = {https://mlanthology.org/cvpr/2016/sevillalara2016cvpr-optical/}
}