An Oscillatory Correlation Framework for Computational Auditory Scene Analysis

Abstract

A neural model is described which uses oscillatory correlation to segregate speech from interfering sound sources. The core of the model is a two-layer neural oscillator network. A sound stream is represented by a synchronized population of oscillators, and different streams are represented by desynchronized oscillator populations. The model has been evaluated using a corpus of speech mixed with interfering sounds, and produces an improvement in signal-to-noise ratio for every mixture.
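The core idea, that a synchronized oscillator population stands for one stream while distinct streams stay desynchronized, can be illustrated with a small simulation. The sketch below is not the paper's implementation: it assumes Terman-Wang-style relaxation oscillators of the kind used in Wang's oscillatory correlation work, and all parameter values, the two-oscillator setup, and the simple Euler integration are illustrative choices. Two excitatorily coupled oscillators drift into synchrony, the signature of belonging to the same stream.

import numpy as np

def simulate(n_steps=60000, dt=0.005, eps=0.02, gamma=6.0, beta=0.1,
             I=0.8, W=0.2, theta=-0.5):
    # Two Terman-Wang relaxation oscillators: x is the fast activity
    # variable, y the slow recovery variable. Initial phases differ.
    x = np.array([0.1, -1.5])
    y = np.array([0.1, 0.1])
    xs = np.empty((n_steps, 2))
    for t in range(n_steps):
        # Excitatory coupling: each oscillator receives input W while
        # the other oscillator's activity exceeds the threshold theta.
        s = W * (x[::-1] > theta)
        dx = 3.0 * x - x**3 + 2.0 - y + I + s
        dy = eps * (gamma * (1.0 + np.tanh(x / beta)) - y)
        x, y = x + dt * dx, y + dt * dy
        xs[t] = x
    return xs

if __name__ == "__main__":
    xs = simulate()
    # After a transient the two activity traces align; a correlation
    # near 1 is the "synchronized population" behaviour described in
    # the abstract.
    tail = xs[-10000:]
    print("correlation of the two traces:", np.corrcoef(tail.T)[0, 1])

In the full model, oscillators for segments of the same stream are linked by excitatory connections and synchronize as above, while a global inhibitor keeps populations belonging to different streams desynchronized from one another; the two-oscillator sketch only shows the synchronizing half of that mechanism.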

Cite

Text

Brown and Wang. "An Oscillatory Correlation Framework for Computational Auditory Scene Analysis." Neural Information Processing Systems, 1999.

Markdown

[Brown and Wang. "An Oscillatory Correlation Framework for Computational Auditory Scene Analysis." Neural Information Processing Systems, 1999.](https://mlanthology.org/neurips/1999/brown1999neurips-oscillatory/)

BibTeX

@inproceedings{brown1999neurips-oscillatory,
  title     = {{An Oscillatory Correlation Framework for Computational Auditory Scene Analysis}},
  author    = {Brown, Guy J. and Wang, DeLiang L.},
  booktitle = {Neural Information Processing Systems},
  year      = {1999},
  pages     = {747--753},
  url       = {https://mlanthology.org/neurips/1999/brown1999neurips-oscillatory/}
}