Aggregate Features and ADABOOST for Music Classification

Abstract

We present an algorithm that predicts musical genre and artist from an audio waveform. Our method uses the ensemble learner ADABOOST to select from a set of audio features that have been extracted from segmented audio and then aggregated. Our classifier proved to be the most effective method for genre classification at the recent MIREX 2005 international contests in music information extraction, and the second-best method for recognizing artists. This paper describes our method in detail, from feature extraction to song classification, and presents an evaluation of our method on three genre databases and two artist-recognition databases. Furthermore, we present evidence collected from a variety of popular features and classifiers that the technique of classifying features aggregated over segments of audio is better than classifying either entire songs or individual short-timescale features.
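The pipeline the abstract describes — compute short-timescale features per frame, aggregate them over fixed-length segments, train AdaBoost on the segment-level aggregates — can be sketched as follows. This is a minimal illustration using synthetic frame features and scikit-learn's `AdaBoostClassifier`, not the authors' implementation; the aggregate statistics (mean and variance) and all sizes are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def aggregate_segments(frame_features, segment_len=100):
    """Summarize per-frame features over fixed-length segments.
    Mean and variance per feature are an illustrative choice of
    aggregate statistics, not necessarily the paper's exact set."""
    segments = []
    for start in range(0, len(frame_features) - segment_len + 1, segment_len):
        seg = frame_features[start:start + segment_len]
        segments.append(np.concatenate([seg.mean(axis=0), seg.var(axis=0)]))
    return np.array(segments)

rng = np.random.default_rng(0)

# Synthetic stand-in for short-timescale audio features (e.g. MFCC frames):
# two "genres" whose frame statistics differ slightly.
X_parts, y_parts = [], []
for genre in (0, 1):
    for _ in range(20):                      # 20 "songs" per genre
        frames = rng.normal(loc=0.5 * genre, scale=1.0, size=(500, 8))
        segs = aggregate_segments(frames)    # 5 segments per song
        X_parts.append(segs)
        y_parts.extend([genre] * len(segs))
X = np.vstack(X_parts)
y = np.array(y_parts)

# AdaBoost over decision stumps, trained on segment-level aggregates;
# a song's label would then be taken as the majority vote over its segments.
clf = AdaBoostClassifier(n_estimators=50, random_state=0)
clf.fit(X, y)
```

Aggregating over segments rather than whole songs is the key design choice the paper evaluates: segments are long enough for stable statistics but numerous enough to give the learner many training examples per song.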

Cite

Text

Bergstra et al. "Aggregate Features and ADABOOST for Music Classification." Machine Learning, 2006. doi:10.1007/s10994-006-9019-7

Markdown

[Bergstra et al. "Aggregate Features and ADABOOST for Music Classification." Machine Learning, 2006.](https://mlanthology.org/mlj/2006/bergstra2006mlj-aggregate/) doi:10.1007/s10994-006-9019-7

BibTeX

@article{bergstra2006mlj-aggregate,
  title     = {{Aggregate Features and ADABOOST for Music Classification}},
  author    = {Bergstra, James and Casagrande, Norman and Erhan, Dumitru and Eck, Douglas and Kégl, Balázs},
  journal   = {Machine Learning},
  year      = {2006},
  pages     = {473--484},
  doi       = {10.1007/s10994-006-9019-7},
  volume    = {65},
  url       = {https://mlanthology.org/mlj/2006/bergstra2006mlj-aggregate/}
}