Improving Socially-Aware Multi-Channel Human Emotion Prediction for Robot Navigation
Abstract
We present a real-time algorithm for emotion-aware navigation of a robot among pedestrians. Our approach estimates time-varying emotional behaviors of pedestrians from their faces and trajectories using a combination of Bayesian inference, CNN-based learning, and the PAD (Pleasure-Arousal-Dominance) model from psychology. These PAD characteristics are used for long-term path prediction and generating proxemic constraints for each pedestrian. We use a multi-channel model to classify pedestrian characteristics into four emotion categories (happy, sad, angry, neutral). In our validation results, we observe an emotion detection accuracy of 85.33%. We formulate emotion-based proxemic constraints to perform socially-aware robot navigation in low- to medium-density environments. We demonstrate the benefits of our algorithm in simulated environments with tens of pedestrians as well as in a real-world setting with Pepper, a social humanoid robot.
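To make the PAD-to-emotion mapping concrete, here is a minimal illustrative sketch (not the paper's actual classifier): it assigns a PAD (Pleasure-Arousal-Dominance) vector to one of the four emotion categories used in the paper by nearest-centroid matching. The centroid values are hypothetical placeholders, not values from the paper.

```python
import math

# Hypothetical PAD centroids for each emotion category, each axis in [-1, 1].
# These values are illustrative assumptions, not taken from the paper.
EMOTION_CENTROIDS = {
    "happy":   ( 0.8,  0.5,  0.3),
    "sad":     (-0.6, -0.4, -0.5),
    "angry":   (-0.5,  0.7,  0.6),
    "neutral": ( 0.0,  0.0,  0.0),
}

def classify_pad(pleasure, arousal, dominance):
    """Return the emotion whose PAD centroid is closest in Euclidean distance."""
    point = (pleasure, arousal, dominance)
    return min(
        EMOTION_CENTROIDS,
        key=lambda e: math.dist(point, EMOTION_CENTROIDS[e]),
    )

print(classify_pad(0.7, 0.4, 0.2))  # closest to the "happy" centroid
```

In the paper's pipeline, such a categorical label would then drive per-pedestrian proxemic constraints (e.g., keeping a larger comfort distance from pedestrians classified as angry).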
Cite
Text
Bera et al. "Improving Socially-Aware Multi-Channel Human Emotion Prediction for Robot Navigation." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2019.
Markdown
[Bera et al. "Improving Socially-Aware Multi-Channel Human Emotion Prediction for Robot Navigation." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2019.](https://mlanthology.org/cvprw/2019/bera2019cvprw-improving/)
BibTeX
@inproceedings{bera2019cvprw-improving,
title = {{Improving Socially-Aware Multi-Channel Human Emotion Prediction for Robot Navigation}},
author = {Bera, Aniket and Randhavane, Tanmay and Manocha, Dinesh},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
year = {2019},
pages = {21--27},
url = {https://mlanthology.org/cvprw/2019/bera2019cvprw-improving/}
}