Multi-Type Activity Recognition from a Robot's Viewpoint
Abstract
The computer vision literature is rich in works analyzing different types of activities -- single actions, two-person interactions, or ego-centric activities, to name a few. However, traditional methods treat these activity types separately, whereas in real settings it is necessary to detect and recognize different types of activities simultaneously. We first design a new unified descriptor, called the Relation History Image (RHI), which can be extracted from all the activity types we are interested in. We then formulate an optimization procedure to detect and recognize activities of different types. We assess our approach on a new dataset recorded from a robot-centric perspective as well as on publicly available datasets, comparing it against multiple baselines.
Cite
Text
Gori et al. "Multi-Type Activity Recognition from a Robot's Viewpoint." International Joint Conference on Artificial Intelligence, 2017. doi:10.24963/IJCAI.2017/680
Markdown
[Gori et al. "Multi-Type Activity Recognition from a Robot's Viewpoint." International Joint Conference on Artificial Intelligence, 2017.](https://mlanthology.org/ijcai/2017/gori2017ijcai-multi/) doi:10.24963/IJCAI.2017/680
BibTeX
@inproceedings{gori2017ijcai-multi,
title = {{Multi-Type Activity Recognition from a Robot's Viewpoint}},
author = {Gori, Ilaria and Aggarwal, J. K. and Matthies, Larry H. and Ryoo, Michael S.},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2017},
pages = {4849--4853},
doi = {10.24963/IJCAI.2017/680},
url = {https://mlanthology.org/ijcai/2017/gori2017ijcai-multi/}
}