Learning to Be Multimodal: Co-Evolving Sensory Modalities and Sensor Properties

Abstract

Making a single sensory modality precise and robust enough to reach human-level performance and autonomy could be very expensive or intractable. Fusing information from multiple sensory modalities is promising – for example, recent works showed benefits from combining vision with haptic sensors or with audio data. Learning-based methods facilitate faster progress in this field by removing the need for manual feature engineering. However, the sensor properties and the choice of sensory modalities are still usually decided manually. Our blue-sky view is that we could simulate or emulate sensors with various properties, then infer which properties and combinations of sensors yield the best learning outcomes. This view would incentivize the development of novel, affordable sensors that can make a noticeable impact on the performance, robustness, and ease of training classifiers, models, and policies for robotics. It would also motivate building hardware that provides signals complementary to existing ones. As a result, we could significantly expand the realm of applicability of learning-based approaches.
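To make the co-evolution idea above concrete, here is a minimal, hypothetical Python sketch (not the method from the paper): it emulates a few sensors with tunable noise and resolution, then searches over modality combinations by scoring each with the cross-validated accuracy of a simple downstream classifier. The sensor names, parameters, and toy task are all illustrative assumptions; a real system could replace the exhaustive search with Bayesian optimization or evolutionary search.

```python
"""Illustrative sketch: score simulated sensor combinations by downstream accuracy.
All names and parameters here are hypothetical, not from the paper."""
import itertools
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def simulate_modality(latent, noise_std, resolution):
    """Emulate one sensor: a noisy, quantized view of the latent state."""
    reading = latent + rng.normal(0.0, noise_std, latent.shape)
    return np.round(reading * resolution) / resolution  # coarse quantization

def evaluate(latent, labels, sensor_configs):
    """Score a sensor set by cross-validated accuracy of a simple classifier."""
    features = np.hstack([simulate_modality(latent, s["noise_std"], s["resolution"])
                          for s in sensor_configs])
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, features, labels, cv=5).mean()

# Toy task: classify a latent 2-D state into two classes.
latent = rng.normal(size=(500, 2))
labels = (latent[:, 0] + latent[:, 1] > 0).astype(int)

# Hypothetical candidate sensors with different noise/resolution trade-offs.
candidates = {
    "vision": {"noise_std": 0.1, "resolution": 2.0},
    "haptic": {"noise_std": 0.5, "resolution": 10.0},
    "audio":  {"noise_std": 0.8, "resolution": 5.0},
}

# Exhaustive search over non-empty modality subsets.
best = max(
    (combo for r in range(1, len(candidates) + 1)
     for combo in itertools.combinations(candidates, r)),
    key=lambda combo: evaluate(latent, labels, [candidates[m] for m in combo]),
)
print("best modality combination:", best)
```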

Cite

Text

Antonova and Bohg. "Learning to Be Multimodal: Co-Evolving Sensory Modalities and Sensor Properties." Conference on Robot Learning, 2021.

Markdown

[Antonova and Bohg. "Learning to Be Multimodal: Co-Evolving Sensory Modalities and Sensor Properties." Conference on Robot Learning, 2021.](https://mlanthology.org/corl/2021/antonova2021corl-learning/)

BibTeX

@inproceedings{antonova2021corl-learning,
  title     = {{Learning to Be Multimodal: Co-Evolving Sensory Modalities and Sensor Properties}},
  author    = {Antonova, Rika and Bohg, Jeannette},
  booktitle = {Conference on Robot Learning},
  year      = {2021},
  pages     = {1782--1788},
  volume    = {164},
  url       = {https://mlanthology.org/corl/2021/antonova2021corl-learning/}
}