SampleRNN: An Unconditional End-to-End Neural Audio Generation Model

Abstract

In this paper we propose a novel model for unconditional audio generation that generates one audio sample at a time. We show that our model, which profits from combining memory-less modules, namely autoregressive multilayer perceptrons, with stateful recurrent neural networks in a hierarchical structure, is able to capture underlying sources of variation in temporal sequences over very long time spans, on three datasets of differing nature. Human evaluation of the generated samples indicates that our model is preferred over competing models. We also show how each component of the model contributes to the exhibited performance.
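To make the hierarchy described in the abstract concrete, here is a minimal two-tier sketch in PyTorch: a stateful recurrent tier advances once per frame of samples, and a memory-less autoregressive MLP predicts each quantized sample from the preceding samples plus per-sample conditioning from the recurrent tier. The class name, tier sizes, GRU cell, and embedding dimension are illustrative assumptions, not the paper's exact configuration.

```python
# Hypothetical two-tier SampleRNN-style sketch; sizes and cell choices
# are assumptions for illustration, not the authors' reported setup.
import torch
import torch.nn as nn

class TwoTierSampleRNN(nn.Module):
    def __init__(self, frame_size=16, hidden=512, quant_levels=256):
        super().__init__()
        self.frame_size = frame_size
        # Stateful recurrent tier: one step per frame of `frame_size` samples.
        self.frame_rnn = nn.GRU(frame_size, hidden, batch_first=True)
        # Project each frame-level state to `frame_size` per-sample vectors.
        self.upsample = nn.Linear(hidden, frame_size * hidden)
        # Memory-less sample-level tier: an autoregressive MLP over the
        # previous `frame_size` quantized samples plus the conditioning vector.
        self.embed = nn.Embedding(quant_levels, 4)
        self.mlp = nn.Sequential(
            nn.Linear(frame_size * 4 + hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, quant_levels),  # logits over quantized samples
        )

    def forward(self, frames, prev_samples, h0=None):
        # frames:       (batch, n_frames, frame_size) real-valued frame inputs
        # prev_samples: (batch, n_frames * frame_size, frame_size) IDs of the
        #               quantized samples preceding each predicted sample
        batch, n_frames, _ = frames.shape
        states, hn = self.frame_rnn(frames, h0)             # (B, F, H)
        cond = self.upsample(states)                        # (B, F, frame_size*H)
        cond = cond.view(batch, n_frames * self.frame_size, -1)  # per-sample
        emb = self.embed(prev_samples).flatten(2)           # (B, T, frame_size*4)
        logits = self.mlp(torch.cat([emb, cond], dim=-1))   # (B, T, quant_levels)
        return logits, hn
```

Training such a sketch would minimize cross-entropy between `logits` and the next quantized sample at each step; at generation time, samples are drawn one at a time and fed back in, which is what "generating one audio sample at a time" refers to.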

Cite

Text

Mehri et al. "SampleRNN: An Unconditional End-to-End Neural Audio Generation Model." International Conference on Learning Representations, 2017.

Markdown

[Mehri et al. "SampleRNN: An Unconditional End-to-End Neural Audio Generation Model." International Conference on Learning Representations, 2017.](https://mlanthology.org/iclr/2017/mehri2017iclr-samplernn/)

BibTeX

@inproceedings{mehri2017iclr-samplernn,
  title     = {{SampleRNN: An Unconditional End-to-End Neural Audio Generation Model}},
  author    = {Mehri, Soroush and Kumar, Kundan and Gulrajani, Ishaan and Kumar, Rithesh and Jain, Shubham and Sotelo, Jose and Courville, Aaron C. and Bengio, Yoshua},
  booktitle = {International Conference on Learning Representations},
  year      = {2017},
  url       = {https://mlanthology.org/iclr/2017/mehri2017iclr-samplernn/}
}