Music-to-Facial Expressions: Emotion-Based Music Visualization for the Hearing Impaired

Abstract

While music is made to convey messages and emotions, auditory music is not equally accessible to everyone. Music visualization is a common approach to augment the listening experience of hearing users and to provide music experiences for the hearing-impaired. In this paper, we present a music visualization system that turns a piece of music into a series of facial expressions representative of the continuously changing sentiments in the music. The resulting facial expressions, recorded as action units, can later animate a static virtual avatar to be emotive in sync with the music.
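The pipeline the abstract describes (continuous sentiment estimates driving facial action-unit activations) could be sketched roughly as below. The specific AUs follow common FACS associations (AU12 lip-corner puller for positive valence, AU4 brow lowerer for negative valence, AU5 upper-lid raiser for high arousal), but the linear valence/arousal mapping itself is an illustrative assumption, not the paper's model.

```python
# Hypothetical sketch: map a (valence, arousal) sentiment estimate for one
# music frame to a few FACS action-unit (AU) intensities in [0, 1].
# The AU numbers are standard FACS codes; the mapping is illustrative only.

def sentiment_to_aus(valence: float, arousal: float) -> dict:
    """Map valence and arousal in [-1, 1] to AU intensities in [0, 1]."""
    def clamp(x):
        return max(0.0, min(1.0, x))
    return {
        "AU12": clamp(valence),    # lip corner puller: smile on positive valence
        "AU4":  clamp(-valence),   # brow lowerer: frown on negative valence
        "AU5":  clamp(arousal),    # upper lid raiser: widened eyes on high arousal
        "AU43": clamp(-arousal),   # eye closure: relaxed lids on low arousal
    }

# A calm, happy passage activates the smile AU but not the frown AU.
print(sentiment_to_aus(0.8, -0.3))
```

Per-frame AU dictionaries like this could then be streamed to an avatar rig that exposes action-unit controls, keeping the expressions synchronized with the music's timeline.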

Cite

Text

Wang et al. "Music-to-Facial Expressions: Emotion-Based Music Visualization for the Hearing Impaired." AAAI Conference on Artificial Intelligence, 2023. doi:10.1609/AAAI.V37I13.26912

Markdown

[Wang et al. "Music-to-Facial Expressions: Emotion-Based Music Visualization for the Hearing Impaired." AAAI Conference on Artificial Intelligence, 2023.](https://mlanthology.org/aaai/2023/wang2023aaai-music/) doi:10.1609/AAAI.V37I13.26912

BibTeX

@inproceedings{wang2023aaai-music,
  title     = {{Music-to-Facial Expressions: Emotion-Based Music Visualization for the Hearing Impaired}},
  author    = {Wang, Yubo and Pan, Fengzhou and Liu, Danni and Hu, Jiaxiong},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2023},
  pages     = {16096--16102},
  doi       = {10.1609/AAAI.V37I13.26912},
  url       = {https://mlanthology.org/aaai/2023/wang2023aaai-music/}
}