Learning Visual Context by Comparison
Abstract
Finding diseases in an X-ray image is an important yet highly challenging task. Current methods exploit various characteristics of the chest X-ray image, but one of the most important cues is still missing: the comparison between related regions in an image. In this paper, we present the Attend-and-Compare Module (ACM), which captures the difference between an object of interest and its corresponding context. We show that explicit difference modeling can be very helpful in tasks that require direct comparison between distant locations. The module can be plugged into existing deep learning models. For evaluation, we apply our module to three chest X-ray recognition tasks and to COCO object detection and segmentation, and observe consistent improvements across tasks. The code is available at https://github.com/mk-minchul/attend-and-compare.
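The abstract does not spell out the module's internals, but its core idea, attending to an object and to its context and explicitly modeling their difference, lends itself to a short illustration. The PyTorch sketch below is a hypothetical reading of an attend-and-compare style block, not the authors' released implementation (see the repository above): the two 1x1 attention branches, the projection of the difference vector, and the residual connection are all assumptions made for illustration.

```python
import torch
import torch.nn as nn


class AttendAndCompare(nn.Module):
    """Hypothetical sketch of an attend-and-compare style module.

    Attends to two feature groups, an "object" and its "context",
    pools each into a single vector, and uses their difference to
    modulate the input feature map. All design details here are
    assumptions, not the paper's exact architecture.
    """

    def __init__(self, channels: int):
        super().__init__()
        # 1x1 convs produce per-location attention logits for each group.
        self.attn_object = nn.Conv2d(channels, 1, kernel_size=1)
        self.attn_context = nn.Conv2d(channels, 1, kernel_size=1)
        # Projects the difference vector into a channel-wise modulation.
        self.project = nn.Conv2d(channels, channels, kernel_size=1)

    def _attend_pool(self, x, attn):
        # Softmax over all spatial positions, then a weighted sum,
        # yielding one pooled vector per image: (B, C, 1, 1).
        b, c, h, w = x.shape
        weights = torch.softmax(attn(x).view(b, 1, h * w), dim=-1)
        pooled = torch.bmm(x.view(b, c, h * w), weights.transpose(1, 2))
        return pooled.view(b, c, 1, 1)

    def forward(self, x):
        obj = self._attend_pool(x, self.attn_object)
        ctx = self._attend_pool(x, self.attn_context)
        # Explicit difference modeling: compare the object against
        # its context, even when the two lie far apart spatially.
        diff = self.project(obj - ctx)
        return x + diff  # residual, so the block is drop-in pluggable


if __name__ == "__main__":
    acm = AttendAndCompare(channels=64)
    feats = torch.randn(2, 64, 32, 32)
    print(acm(feats).shape)  # torch.Size([2, 64, 32, 32])
```

Because the output has the same shape as the input, a block like this can in principle be inserted between stages of an existing backbone, which matches the abstract's claim that the module is pluggable.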
Cite
Text
Kim et al. "Learning Visual Context by Comparison." Proceedings of the European Conference on Computer Vision (ECCV), 2020. doi:10.1007/978-3-030-58558-7_34
Markdown
[Kim et al. "Learning Visual Context by Comparison." Proceedings of the European Conference on Computer Vision (ECCV), 2020.](https://mlanthology.org/eccv/2020/kim2020eccv-learning/) doi:10.1007/978-3-030-58558-7_34
BibTeX
@inproceedings{kim2020eccv-learning,
  title     = {{Learning Visual Context by Comparison}},
  author    = {Kim, Minchul and Park, Jongchan and Na, Seil and Park, Chang Min and Yoo, Donggeun},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2020},
  doi       = {10.1007/978-3-030-58558-7_34},
  url       = {https://mlanthology.org/eccv/2020/kim2020eccv-learning/}
}