Learning Adaptive Game Soundtrack Control

Abstract

In this paper, we demonstrate a novel technique for dynamically generating an emotionally directed video game soundtrack. We begin with a human Conductor who observes gameplay and directs the emotions that would enhance the observed gameplay experience. We apply supervised learning to synchronized samples of gameplay features (input) and the Conductor's emotional direction (output) in order to fit a model of the Conductor's emotional direction. During gameplay, this model maps the current game state to an emotional direction, which is then passed to a music generation module that dynamically generates emotionally relevant music. Our empirical study suggests that random forests serve well for modeling the Conductor for our two experimental game genres.
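As a rough illustration of the pipeline described in the abstract, the sketch below fits a random forest to synchronized gameplay/emotion samples and then uses it at runtime to drive a music module. The specific gameplay features, the valence/arousal encoding of emotional direction, and the `music_generator.set_emotion` interface are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch only: the feature names, the valence/arousal target
# encoding, and the music-generation interface are assumptions, not the
# paper's actual design.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# --- Offline: fit the Conductor model from synchronized samples -------------
# Each row of X_train: gameplay state features sampled at one moment
# (hypothetical features, e.g. player health, nearby enemies, seconds since combat).
X_train = np.array([
    [0.9, 2, 0.1],
    [0.3, 5, 0.0],
    [1.0, 0, 12.5],
])
# Each row of y_train: the Conductor's emotional direction at that same moment
# (assumed here to be a valence/arousal pair in [-1, 1]).
y_train = np.array([
    [0.2,  0.6],
    [-0.7, 0.9],
    [0.8,  0.1],
])

conductor_model = RandomForestRegressor(n_estimators=100, random_state=0)
conductor_model.fit(X_train, y_train)

# --- Online: map the current game state to an emotional direction -----------
def update_soundtrack(game_state_features, music_generator):
    """Predict emotional direction and hand it to the music generation module."""
    valence, arousal = conductor_model.predict([game_state_features])[0]
    # `music_generator.set_emotion` is a placeholder for whatever interface
    # the dynamic music module actually exposes.
    music_generator.set_emotion(valence=valence, arousal=arousal)
```

In this sketch the random forest plays the role of the learned Conductor model: once trained, `update_soundtrack` would be called periodically during gameplay with the latest game state features.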

Cite

Text

Dorsey et al. "Learning Adaptive Game Soundtrack Control." AAAI Conference on Artificial Intelligence, 2023. doi:10.1609/AAAI.V37I13.26909

Markdown

[Dorsey et al. "Learning Adaptive Game Soundtrack Control." AAAI Conference on Artificial Intelligence, 2023.](https://mlanthology.org/aaai/2023/dorsey2023aaai-learning/) doi:10.1609/AAAI.V37I13.26909

BibTeX

@inproceedings{dorsey2023aaai-learning,
  title     = {{Learning Adaptive Game Soundtrack Control}},
  author    = {Dorsey, Aaron and Neller, Todd W. and Tran, Hien G. and Yilmaz, Veysel},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2023},
  pages     = {16070--16077},
  doi       = {10.1609/AAAI.V37I13.26909},
  url       = {https://mlanthology.org/aaai/2023/dorsey2023aaai-learning/}
}