Learning Social Affordance for Human-Robot Interaction
Abstract
In this paper, we present an approach for robot learning of social affordance from human activity videos. We consider the problem in the context of human-robot interaction: our approach learns structural representations of human-human (and human-object-human) interactions, describing how body-parts of each agent move with respect to each other and what spatial relations they should maintain to complete each sub-event (i.e., sub-goal). This enables the robot to infer its own movement in reaction to the human body motion, allowing it to naturally replicate such interactions.
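As a rough illustration of the kind of representation the abstract describes, the sketch below encodes a sub-event as per-agent joint trajectories plus pairwise spatial relations, and derives a reactive robot pose from an observed human pose. All names and the simple distance-offset heuristic are hypothetical and only stand in for the paper's learned model.

from dataclasses import dataclass, field
from typing import Dict, List, Tuple

Point3D = Tuple[float, float, float]

@dataclass
class SubEvent:
    # One sub-event (sub-goal) of an interaction, e.g. "approach" or "extend_hand".
    name: str
    # Per-agent joint trajectories: agent -> joint -> sequence of 3D positions.
    joint_trajectories: Dict[str, Dict[str, List[Point3D]]] = field(default_factory=dict)
    # Pairwise spatial relations to maintain: (human joint, robot joint, target distance in meters).
    relations: List[Tuple[str, str, float]] = field(default_factory=list)

def infer_reactive_pose(sub_event: SubEvent,
                        observed_human: Dict[str, Point3D]) -> Dict[str, Point3D]:
    """Toy reactive inference: place each robot joint at the learned target
    distance from the corresponding observed human joint (offset along x)."""
    robot_pose: Dict[str, Point3D] = {}
    for human_joint, robot_joint, target_dist in sub_event.relations:
        hx, hy, hz = observed_human[human_joint]
        robot_pose[robot_joint] = (hx + target_dist, hy, hz)
    return robot_pose

# Example: a "shake hands" sub-event keeping right hands about 0.1 m apart.
shake = SubEvent(name="shake_hands",
                 relations=[("right_hand", "right_hand", 0.1)])
print(infer_reactive_pose(shake, {"right_hand": (0.5, 1.0, 1.1)}))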
Cite
Text
Shu et al. "Learning Social Affordance for Human-Robot Interaction." International Joint Conference on Artificial Intelligence, 2016.
Markdown
[Shu et al. "Learning Social Affordance for Human-Robot Interaction." International Joint Conference on Artificial Intelligence, 2016.](https://mlanthology.org/ijcai/2016/shu2016ijcai-learning/)
BibTeX
@inproceedings{shu2016ijcai-learning,
  title = {{Learning Social Affordance for Human-Robot Interaction}},
  author = {Shu, Tianmin and Ryoo, Michael S. and Zhu, Song-Chun},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year = {2016},
  pages = {3454--3461},
  url = {https://mlanthology.org/ijcai/2016/shu2016ijcai-learning/}
}