Segmenting Object Affordances: Reproducibility and Sensitivity to Scale
Abstract
Visual affordance segmentation identifies the image regions of an object that an agent can interact with. Existing methods reuse and adapt learning-based semantic segmentation architectures to the affordance segmentation task and evaluate them on small datasets. However, experimental setups are often not reproducible, leading to unfair and inconsistent comparisons. In this work, we benchmark these methods under a reproducible setup on two single-object scenarios, tabletop without occlusions and hand-held containers, to facilitate future comparisons. We include a version of a recent architecture, Mask2Former, re-trained for affordance segmentation, and show that this model performs best on most test sets of both scenarios. Our analysis shows that models are not robust to scale variations when object resolutions differ from those in the training set.
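As a companion to the abstract, the following is a minimal sketch of what re-training Mask2Former for affordance segmentation can look like, using the Hugging Face `transformers` implementation of the architecture. The starting checkpoint, the three-class affordance label set, and the toy image are illustrative assumptions, not the authors' setup or released code.

```python
# Minimal sketch (assumptions noted inline; not the authors' released code):
# adapting Mask2Former to affordance segmentation with Hugging Face transformers.
import numpy as np
import torch
from transformers import Mask2FormerForUniversalSegmentation, Mask2FormerImageProcessor

# Hypothetical affordance label set for hand-held containers (assumption).
id2label = {0: "background", 1: "graspable", 2: "contain"}

processor = Mask2FormerImageProcessor(ignore_index=255, do_reduce_labels=False)
model = Mask2FormerForUniversalSegmentation.from_pretrained(
    "facebook/mask2former-swin-tiny-ade-semantic",  # assumed starting checkpoint
    id2label=id2label,
    label2id={name: idx for idx, name in id2label.items()},
    ignore_mismatched_sizes=True,  # re-initialises the class head for 3 labels
)

# Toy RGB image and per-pixel affordance annotation, standing in for a real dataset.
image = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
seg_map = np.zeros((480, 640), dtype=np.uint8)
seg_map[220:320, 260:360] = 1  # "graspable" region (e.g., a handle)
seg_map[100:200, 260:360] = 2  # "contain" region (e.g., an opening)

# The processor converts the segmentation map into the per-query mask/class
# targets that Mask2Former's set-prediction loss expects.
batch = processor(images=[image], segmentation_maps=[seg_map], return_tensors="pt")

model.train()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
outputs = model(
    pixel_values=batch["pixel_values"],
    mask_labels=batch["mask_labels"],
    class_labels=batch["class_labels"],
)
outputs.loss.backward()  # one optimisation step on the toy batch
optimizer.step()

# At test time, a per-pixel affordance map is recovered from the query outputs.
model.eval()
with torch.no_grad():
    outputs = model(pixel_values=batch["pixel_values"])
pred = processor.post_process_semantic_segmentation(outputs, target_sizes=[(480, 640)])[0]
print(pred.shape)  # torch.Size([480, 640]), values in {0, 1, 2}
```

Under this setup, the scale sensitivity discussed in the abstract could be probed by resizing test images so that object resolutions differ from those seen during training and comparing the resulting segmentation quality.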
Cite
Text
Apicella et al. "Segmenting Object Affordances: Reproducibility and Sensitivity to Scale." European Conference on Computer Vision Workshops, 2024. doi:10.1007/978-3-031-92591-7_18

Markdown
[Apicella et al. "Segmenting Object Affordances: Reproducibility and Sensitivity to Scale." European Conference on Computer Vision Workshops, 2024.](https://mlanthology.org/eccvw/2024/apicella2024eccvw-segmenting/) doi:10.1007/978-3-031-92591-7_18

BibTeX
@inproceedings{apicella2024eccvw-segmenting,
title = {{Segmenting Object Affordances: Reproducibility and Sensitivity to Scale}},
author = {Apicella, Tommaso and Xompero, Alessio and Gastaldo, Paolo and Cavallaro, Andrea},
booktitle = {European Conference on Computer Vision Workshops},
year = {2024},
pages = {286--304},
doi = {10.1007/978-3-031-92591-7_18},
url = {https://mlanthology.org/eccvw/2024/apicella2024eccvw-segmenting/}
}