Lookine: Let the Blind Hear a Smile
Abstract
Nonverbal visual information, including facial expressions, facial micro-actions, and head movements, plays a significant role in fundamental social communication. Unfortunately, blind people cannot access this information. We therefore propose Lookine, a social assistant system that helps them overcome this limitation. Lookine applies facial expression recognition, facial action recognition, and head pose estimation, and follows barrier-free design principles. In our experiments, an algorithm evaluation and a user study show that the system achieves promising accuracy, good real-time performance, and a positive user experience.
Cite
Text
Bu et al. "Lookine: Let the Blind Hear a Smile." AAAI Conference on Artificial Intelligence, 2018. doi:10.1609/AAAI.V32I1.11377
Markdown
[Bu et al. "Lookine: Let the Blind Hear a Smile." AAAI Conference on Artificial Intelligence, 2018.](https://mlanthology.org/aaai/2018/bu2018aaai-lookine/) doi:10.1609/AAAI.V32I1.11377
BibTeX
@inproceedings{bu2018aaai-lookine,
title = {{Lookine: Let the Blind Hear a Smile}},
author = {Bu, Yaohua and Jia, Jia and Tang, Yuhan and Zang, Xuan and Gao, Tianyu},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2018},
pages = {8196-8197},
doi = {10.1609/AAAI.V32I1.11377},
url = {https://mlanthology.org/aaai/2018/bu2018aaai-lookine/}
}