Active Visual Exploration Based on Attention-Map Entropy

Abstract

Active visual exploration addresses the issue of limited sensor capabilities in real-world scenarios, where successive observations are actively chosen based on the environment. To tackle this problem, we introduce a new technique called Attention-Map Entropy (AME). It leverages the internal uncertainty of the transformer-based model to determine the most informative observations. In contrast to existing solutions, it does not require additional loss components, which simplifies the training. Through experiments, which also mimic retina-like sensors, we show that such simplified training significantly improves the performance of reconstruction, segmentation and classification on publicly available datasets.
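The core idea described above, selecting the next observation by the entropy of the model's attention maps, can be sketched as follows. This is a minimal illustration of the general principle, not the paper's exact formulation: the array shapes, the column-wise normalization, and the query-averaged entropy are assumptions made for the sketch.

```python
import numpy as np

def next_glimpse(attn, explored):
    """Pick the next glimpse location by attention-map entropy.

    attn: (Q, P) attention weights from a transformer, Q queries
          attending over P image patches.
    explored: boolean mask of length P marking already-seen patches.
    Returns the index of the unexplored patch whose received attention
    is most uncertain (highest entropy), i.e. most informative to visit.
    """
    eps = 1e-12
    # normalize the attention each patch receives into a distribution
    p = attn / (attn.sum(axis=0, keepdims=True) + eps)
    # Shannon entropy per patch, aggregated over queries
    entropy = -(p * np.log(p + eps)).sum(axis=0)  # shape (P,)
    entropy[explored] = -np.inf                   # never revisit a patch
    return int(entropy.argmax())
```

For example, a patch over which attention is spread uniformly (high entropy, high model uncertainty) is chosen before a patch whose attention is concentrated on a single query. No auxiliary loss is needed, since the entropy is computed from attention maps the model produces anyway.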

Cite

Text

Pardyl et al. "Active Visual Exploration Based on Attention-Map Entropy." International Joint Conference on Artificial Intelligence, 2023. doi:10.24963/IJCAI.2023/145

Markdown

[Pardyl et al. "Active Visual Exploration Based on Attention-Map Entropy." International Joint Conference on Artificial Intelligence, 2023.](https://mlanthology.org/ijcai/2023/pardyl2023ijcai-active/) doi:10.24963/IJCAI.2023/145

BibTeX

@inproceedings{pardyl2023ijcai-active,
  title     = {{Active Visual Exploration Based on Attention-Map Entropy}},
  author    = {Pardyl, Adam and Rypesc, Grzegorz and Kurzejamski, Grzegorz and Zielinski, Bartosz and Trzcinski, Tomasz},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2023},
  pages     = {1303--1311},
  doi       = {10.24963/IJCAI.2023/145},
  url       = {https://mlanthology.org/ijcai/2023/pardyl2023ijcai-active/}
}