One-Shot Transfer of Affordance Regions? AffCorrs!
Abstract
In this work, we tackle one-shot visual search of object parts. Given a single reference image of an object with annotated affordance regions, we segment semantically corresponding parts within a target scene. We propose AffCorrs, an unsupervised model that combines the properties of a pre-trained DINO-ViT's image descriptors with cyclic correspondences. We use AffCorrs to find corresponding affordances both for intra- and inter-class one-shot part segmentation. This task is more difficult than supervised alternatives, but enables future work such as learning affordances via imitation and assisted teleoperation.
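For intuition, the cyclic-correspondence idea can be sketched as mutual nearest-neighbour matching over dense patch descriptors: a reference patch and a target patch correspond only if each is the other's closest match, so the match cycle closes back on itself. The sketch below is an illustration under that assumption, not the authors' implementation; it assumes descriptors have already been extracted (e.g., from a pre-trained DINO-ViT) and L2-normalised, and the function name is hypothetical.

```python
# Minimal sketch: cyclic (mutual nearest-neighbour) correspondences
# between two sets of dense patch descriptors. Illustrative only;
# the full AffCorrs pipeline is more involved.
import torch

def cyclic_correspondences(ref_desc: torch.Tensor,
                           tgt_desc: torch.Tensor) -> torch.Tensor:
    """Return (K, 2) index pairs (i, j) where reference patch i and
    target patch j are each other's nearest neighbours.

    ref_desc: (N, D) L2-normalised descriptors of the reference image.
    tgt_desc: (M, D) L2-normalised descriptors of the target image.
    """
    sim = ref_desc @ tgt_desc.T           # (N, M) cosine similarities
    fwd = sim.argmax(dim=1)               # reference -> target match
    bwd = sim.argmax(dim=0)               # target -> reference match
    ref_idx = torch.arange(ref_desc.shape[0])
    cyclic = bwd[fwd] == ref_idx          # cycle returns to the start
    return torch.stack([ref_idx[cyclic], fwd[cyclic]], dim=1)
```

Correspondences whose reference patches fall inside an annotated affordance region can then be used to identify the semantically matching part in the target scene.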
Cite
Text
Hadjivelichkov et al. "One-Shot Transfer of Affordance Regions? AffCorrs!." Conference on Robot Learning, 2022.
Markdown
[Hadjivelichkov et al. "One-Shot Transfer of Affordance Regions? AffCorrs!." Conference on Robot Learning, 2022.](https://mlanthology.org/corl/2022/hadjivelichkov2022corl-oneshot/)
BibTeX
@inproceedings{hadjivelichkov2022corl-oneshot,
title = {{One-Shot Transfer of Affordance Regions? AffCorrs!}},
author = {Hadjivelichkov, Denis and Zwane, Sicelukwanda and Agapito, Lourdes and Deisenroth, Marc Peter and Kanoulas, Dimitrios},
booktitle = {Conference on Robot Learning},
year = {2022},
pages = {550--560},
volume = {205},
url = {https://mlanthology.org/corl/2022/hadjivelichkov2022corl-oneshot/}
}