Mining Self-Similarity: Label Super-Resolution with Epitomic Representations

Abstract

We show that simple patch-based models, such as epitomes (Jojic et al., 2003), can outperform the current state of the art in semantic segmentation and label super-resolution, which relies on deep convolutional neural networks. We derive a new training algorithm for epitomes that, for the first time, allows learning from very large data sets, and we formulate label super-resolution as a statistical inference problem over epitomic representations. We illustrate our methods on land cover mapping and medical image analysis tasks.
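For background on the model referenced in the abstract: the epitome of Jojic et al. (2003) is a condensed image of per-pixel means and variances whose patches explain the patches of the training images. A minimal sketch of the patch likelihood, in notation chosen here for illustration (epitome $e$ with means $\mu$ and variances $\phi$, and a mapping $T_k$ from patch pixels to epitome coordinates), is:

$$
p(Z_k \mid T_k, e) \;=\; \prod_{i \in S_k} \mathcal{N}\!\left(z_{k,i};\ \mu_{T_k(i)},\ \phi_{T_k(i)}\right),
$$

where $S_k$ indexes the pixels of patch $Z_k$. Learning then amounts to inferring the mappings $T_k$ and updating the epitome parameters, e.g., by expectation-maximization.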

Cite

Text

Malkin et al. "Mining Self-Similarity: Label Super-Resolution with Epitomic Representations." Proceedings of the European Conference on Computer Vision (ECCV), 2020. doi:10.1007/978-3-030-58574-7_32

Markdown

[Malkin et al. "Mining Self-Similarity: Label Super-Resolution with Epitomic Representations." Proceedings of the European Conference on Computer Vision (ECCV), 2020.](https://mlanthology.org/eccv/2020/malkin2020eccv-mining/) doi:10.1007/978-3-030-58574-7_32

BibTeX

@inproceedings{malkin2020eccv-mining,
  title     = {{Mining Self-Similarity: Label Super-Resolution with Epitomic Representations}},
  author    = {Malkin, Nikolay and Ortiz, Anthony and Jojic, Nebojsa},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2020},
  doi       = {10.1007/978-3-030-58574-7_32},
  url       = {https://mlanthology.org/eccv/2020/malkin2020eccv-mining/}
}