Made to Order: Discovering Monotonic Temporal Changes via Self-Supervised Video Ordering

Abstract

Our objective is to discover and localize monotonic temporal changes in a sequence of images. To achieve this, we exploit a simple proxy task of ordering a shuffled image sequence, with ‘time’ serving as a supervisory signal, since only changes that are monotonic with time can give rise to the correct ordering. We also introduce a transformer-based model for ordering image sequences of arbitrary length, with built-in attribution maps. After training, the model successfully discovers and localizes monotonic changes while ignoring cyclic and stochastic ones. We demonstrate applications of the model in multiple domains covering different scene and object types, discovering both object-level and environmental changes in unseen sequences. We also demonstrate that the attention-based attribution maps function as effective prompts for segmenting the changing regions, and that the learned representations can be used for downstream applications. Finally, we show that the model achieves state-of-the-art results on standard benchmarks for image ordering.
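
To make the proxy task concrete, below is a minimal sketch of shuffle-and-order training. It is an illustration under assumptions, not the authors' implementation: the `OrderingModel`, its tiny convolutional frame encoder, and all hyperparameters are hypothetical stand-ins. The two points it illustrates are that no positional encoding is attached to the shuffled input (so ordering must come from image content), and that the only supervision is each frame's original temporal index.

# Hypothetical sketch of the shuffle-and-order proxy task (not the paper's code).
import torch
import torch.nn as nn

class OrderingModel(nn.Module):
    def __init__(self, feat_dim=128, max_len=16):
        super().__init__()
        # Stand-in frame encoder; any image backbone could be used here.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        # Classify each frame into one of max_len temporal positions.
        self.head = nn.Linear(feat_dim, max_len)

    def forward(self, frames):  # frames: (B, T, 3, H, W), shuffled in time
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1)).view(b, t, -1)
        # Deliberately no positional encoding: the input order carries
        # no information, so the prediction must rely on monotonic
        # content changes across frames.
        return self.head(self.transformer(feats))  # (B, T, max_len)

# One self-supervised step: shuffle a clip, then predict the permutation.
model = OrderingModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
video = torch.randn(2, 8, 3, 64, 64)  # ordered clip (B, T, C, H, W)
perm = torch.stack([torch.randperm(8) for _ in range(2)])
shuffled = torch.stack([video[i, perm[i]] for i in range(2)])
logits = model(shuffled)  # (B, T, max_len)
# 'time' is the supervisory signal: the target for the frame now at
# position j is its original index perm[i][j].
loss = nn.functional.cross_entropy(logits.flatten(0, 1), perm.flatten())
opt.zero_grad(); loss.backward(); opt.step()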

Cite

Text

Yang et al. "Made to Order: Discovering Monotonic Temporal Changes via Self-Supervised Video Ordering." Proceedings of the European Conference on Computer Vision (ECCV), 2024. doi:10.1007/978-3-031-72904-1_16

Markdown

[Yang et al. "Made to Order: Discovering Monotonic Temporal Changes via Self-Supervised Video Ordering." Proceedings of the European Conference on Computer Vision (ECCV), 2024.](https://mlanthology.org/eccv/2024/yang2024eccv-made/) doi:10.1007/978-3-031-72904-1_16

BibTeX

@inproceedings{yang2024eccv-made,
  title     = {{Made to Order: Discovering Monotonic Temporal Changes via Self-Supervised Video Ordering}},
  author    = {Yang, Charig and Xie, Weidi and Zisserman, Andrew},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2024},
  doi       = {10.1007/978-3-031-72904-1_16},
  url       = {https://mlanthology.org/eccv/2024/yang2024eccv-made/}
}