A Learning Scheme for Generating Expressive Music Performances of Jazz Standards
Abstract
We describe our approach for generating expressive music performances of monophonic Jazz melodies. It consists of three components: (a) a melodic transcription component which extracts a set of acoustic features from monophonic recordings, (b) a machine learning component which induces an expressive transformation model from the set of extracted acoustic features, and (c) a melody synthesis component which generates expressive monophonic output (MIDI or audio) from inexpressive melody descriptions using the induced expressive transformation model. In this paper we concentrate on the machine learning component, in particular, on the learning scheme we use for generating expressive audio from a score.
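The abstract describes a three-stage pipeline: transcription to note-level descriptions, induction of an expressive transformation model, and synthesis that applies that model to an inexpressive score. A minimal sketch of that pipeline shape is below; all class names, feature choices, and the averaging "model" are illustrative assumptions, not the authors' actual method (which induces the model with machine learning).

```python
# Hypothetical sketch of the paper's three-component pipeline.
# Names and the trivial averaging "model" are assumptions for illustration only.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class NoteDescription:
    pitch: int        # MIDI pitch of the score note
    onset: float      # nominal onset time, in beats
    duration: float   # nominal duration, in beats

@dataclass
class ExpressiveNote:
    pitch: int
    onset: float      # onset with timing deviation applied
    duration: float   # duration after stretching/shortening
    energy: float     # predicted loudness/energy

def transcribe(acoustic_features: List[dict]) -> List[NoteDescription]:
    """(a) Melodic transcription: map extracted acoustic features to notes."""
    return [NoteDescription(f["pitch"], f["onset"], f["duration"])
            for f in acoustic_features]

def induce_model(examples: List[dict]) -> Callable[[NoteDescription], ExpressiveNote]:
    """(b) Learning: induce a per-note transformation from example performances.
    Stand-in only: averages the observed deviations instead of learning rules."""
    n = len(examples)
    onset_dev = sum(e["onset_dev"] for e in examples) / n
    dur_ratio = sum(e["dur_ratio"] for e in examples) / n
    energy = sum(e["energy"] for e in examples) / n
    def transform(note: NoteDescription) -> ExpressiveNote:
        return ExpressiveNote(note.pitch, note.onset + onset_dev,
                              note.duration * dur_ratio, energy)
    return transform

def synthesize(score: List[NoteDescription],
               transform: Callable[[NoteDescription], ExpressiveNote]) -> List[ExpressiveNote]:
    """(c) Synthesis: apply the induced model to an inexpressive melody."""
    return [transform(note) for note in score]
```

A real system would replace `induce_model` with the paper's learning scheme and emit MIDI or audio rather than note objects; the sketch only shows how the three components hand data to one another.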
Cite
Text
Ramírez and Hazan. "A Learning Scheme for Generating Expressive Music Performances of Jazz Standards." International Joint Conference on Artificial Intelligence, 2005.
Markdown
[Ramírez and Hazan. "A Learning Scheme for Generating Expressive Music Performances of Jazz Standards." International Joint Conference on Artificial Intelligence, 2005.](https://mlanthology.org/ijcai/2005/ramirez2005ijcai-learning/)
BibTeX
@inproceedings{ramirez2005ijcai-learning,
title = {{A Learning Scheme for Generating Expressive Music Performances of Jazz Standards}},
author = {Ramírez, Rafael and Hazan, Amaury},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2005},
pages = {1628--1629},
url = {https://mlanthology.org/ijcai/2005/ramirez2005ijcai-learning/}
}