Demonstration of PerformanceNet: A Convolutional Neural Network Model for Score-to-Audio Music Generation

Abstract

We present in this paper PerformanceNet, a neural network model we recently proposed for score-to-audio music generation. The model learns to convert a music piece from the symbolic domain to the audio domain, automatically assigning performance-level attributes, such as changes in velocity, to the music and then synthesizing the audio. The model is therefore not just a neural audio synthesizer, but an AI performer that learns to interpret a musical score in its own way. The code and sample outputs of the model can be found online at https://github.com/bwang514/PerformanceNet.
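As a rough illustration of the score-to-audio idea, the sketch below maps a piano-roll score to a magnitude spectrogram with a small PyTorch ConvNet. The class name, layer sizes, and hyperparameters are illustrative assumptions, not the authors' actual architecture; a phase-recovery step (e.g., Griffin-Lim) would still be needed to turn the predicted spectrogram into a waveform.

```python
# Hypothetical sketch of a score-to-audio pipeline in the spirit of
# PerformanceNet; names and hyperparameters are illustrative only.
import torch
import torch.nn as nn

class ScoreToSpectrogram(nn.Module):
    """1-D ConvNet mapping a piano-roll score (pitch x time) to a
    magnitude spectrogram (frequency-bin x time)."""
    def __init__(self, n_pitches=128, n_bins=1025, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_pitches, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(hidden, n_bins, kernel_size=5, padding=2),
        )

    def forward(self, pianoroll):       # (batch, n_pitches, time)
        return self.net(pianoroll)      # (batch, n_bins, time)

model = ScoreToSpectrogram()
score = torch.rand(1, 128, 400)         # dummy piano roll: 128 pitches, 400 frames
spec = model(score)                     # predicted magnitude spectrogram
print(spec.shape)                       # torch.Size([1, 1025, 400])
# Phase can then be estimated with a standard method such as
# torchaudio.transforms.GriffinLim before inverting to a waveform.
```

A training setup along these lines would regress the predicted spectrogram against the spectrogram of a real recorded performance, which is how the model can pick up performance-level attributes that are absent from the score itself.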

Cite

Text

Chen et al. "Demonstration of PerformanceNet: A Convolutional Neural Network Model for Score-to-Audio Music Generation." International Joint Conference on Artificial Intelligence, 2019. doi:10.24963/IJCAI.2019/938

Markdown

[Chen et al. "Demonstration of PerformanceNet: A Convolutional Neural Network Model for Score-to-Audio Music Generation." International Joint Conference on Artificial Intelligence, 2019.](https://mlanthology.org/ijcai/2019/chen2019ijcai-demonstration/) doi:10.24963/IJCAI.2019/938

BibTeX

@inproceedings{chen2019ijcai-demonstration,
  title     = {{Demonstration of PerformanceNet: A Convolutional Neural Network Model for Score-to-Audio Music Generation}},
  author    = {Chen, Yu-Hua and Wang, Bryan and Yang, Yi-Hsuan},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2019},
  pages     = {6506--6508},
  doi       = {10.24963/IJCAI.2019/938},
  url       = {https://mlanthology.org/ijcai/2019/chen2019ijcai-demonstration/}
}