ISNN: Impact Sound Neural Network for Audio-Visual Object Classification

Abstract

3D object geometry reconstruction remains a challenge when working with transparent, occluded, or highly reflective surfaces. While recent methods classify shape features using raw audio, we present a multimodal neural network optimized for estimating an object's geometry and material. Our networks use spectrograms of recorded and synthesized object impact sounds, together with voxelized shape estimates, to extend the capabilities of vision-based reconstruction. We evaluate our method on multiple datasets of both recorded and synthesized sounds. We further present an interactive application for real-time scene reconstruction in which a user can strike objects, producing sounds that allow the system to instantly classify and segment the struck object, even if the object is transparent or visually occluded.
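The abstract describes a two-branch, audio-visual design: one branch processes an impact-sound spectrogram, the other a voxelized shape estimate, and the two feature vectors are fused for classification. As a rough illustration only, the sketch below shows one way such a fusion could be wired up in PyTorch; the layer sizes, input resolutions, and pooling choices are assumptions and do not reproduce the ISNN architecture from the paper.

import torch
import torch.nn as nn

class MultimodalImpactClassifier(nn.Module):
    """Illustrative audio-visual classifier: a 2D CNN over an impact-sound
    spectrogram plus a 3D CNN over a voxel occupancy grid, fused by feature
    concatenation. All sizes are assumptions, not the published ISNN layers."""

    def __init__(self, num_classes: int):
        super().__init__()
        # Audio branch: spectrogram treated as a 1-channel image.
        self.audio_branch = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),      # -> 32 * 4 * 4 = 512
        )
        # Shape branch: voxelized shape estimate, 1 channel.
        self.voxel_branch = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, stride=2), nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool3d(2), nn.Flatten(),      # -> 32 * 2 * 2 * 2 = 256
        )
        # Fused classification head over concatenated branch features.
        self.classifier = nn.Sequential(
            nn.Linear(512 + 256, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, spectrogram, voxels):
        fused = torch.cat([self.audio_branch(spectrogram),
                           self.voxel_branch(voxels)], dim=1)
        return self.classifier(fused)

# Example shapes (assumed): a 128x64 spectrogram and a 32^3 voxel grid.
model = MultimodalImpactClassifier(num_classes=10)
logits = model(torch.randn(1, 1, 128, 64), torch.randn(1, 1, 32, 32, 32))

Concatenation-based fusion is only one option; the paper evaluates audio-only and audio-visual variants, and the exact merging scheme should be taken from the publication rather than this sketch.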

Cite

Text

Sterling et al. "ISNN: Impact Sound Neural Network for Audio-Visual Object Classification." Proceedings of the European Conference on Computer Vision (ECCV), 2018. doi:10.1007/978-3-030-01267-0_34

Markdown

[Sterling et al. "ISNN: Impact Sound Neural Network for Audio-Visual Object Classification." Proceedings of the European Conference on Computer Vision (ECCV), 2018.](https://mlanthology.org/eccv/2018/sterling2018eccv-isnn/) doi:10.1007/978-3-030-01267-0_34

BibTeX

@inproceedings{sterling2018eccv-isnn,
  title     = {{ISNN: Impact Sound Neural Network for Audio-Visual Object Classification}},
  author    = {Sterling, Auston and Wilson, Justin and Lowe, Sam and Lin, Ming C.},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2018},
  doi       = {10.1007/978-3-030-01267-0_34},
  url       = {https://mlanthology.org/eccv/2018/sterling2018eccv-isnn/}
}