Multi-Modal Anomaly Detection for Unstructured and Uncertain Environments
Abstract
To achieve high levels of autonomy, modern robots require the ability to detect and recover from anomalies and failures with minimal human supervision. Multi-modal sensor signals could provide more information for such anomaly detection tasks; however, the fusion of high-dimensional and heterogeneous sensor modalities remains a challenging problem. We propose a deep neural network, the supervised variational autoencoder (SVAE), for failure identification in unstructured and uncertain environments. Our model leverages the representational power of the VAE to extract robust features from high-dimensional inputs for supervised learning tasks. The training objective unifies the generative model and the discriminative model, making learning a one-stage procedure. Our experiments on real field robot data demonstrate superior failure identification performance over baseline methods and show that our model learns interpretable representations.
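To make the one-stage objective concrete, here is a minimal PyTorch sketch of a supervised VAE loss: the negative ELBO (reconstruction plus KL term) is optimized jointly with a classification term computed from the latent code. This is an illustrative assumption, not the authors' released implementation; all layer sizes, the `alpha` weight, and the class/input dimensions are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SVAE(nn.Module):
    """Hypothetical supervised VAE: an encoder maps the multi-modal input
    to a latent code z, a decoder reconstructs the input from z, and a
    classifier predicts the failure class from z."""

    def __init__(self, input_dim=64, latent_dim=8, n_classes=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, 32), nn.ReLU())
        self.fc_mu = nn.Linear(32, latent_dim)
        self.fc_logvar = nn.Linear(32, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, input_dim)
        )
        self.classifier = nn.Linear(latent_dim, n_classes)

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        # Reparameterization trick: sample z while keeping gradients.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.decoder(z), self.classifier(z), mu, logvar

def svae_loss(model, x, y, alpha=1.0):
    """One-stage objective: generative terms (reconstruction + KL) and a
    discriminative term (cross-entropy) summed into a single loss."""
    x_hat, logits, mu, logvar = model(x)
    recon = F.mse_loss(x_hat, x)                                    # reconstruction
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())   # KL to N(0, I)
    clf = F.cross_entropy(logits, y)                                # classification
    return recon + kl + alpha * clf
```

Because the combined loss is a single differentiable objective, one optimizer step (e.g., `loss = svae_loss(model, x, y); loss.backward()`) trains the generative and discriminative components together, with no separate pretraining stage.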
Cite
Text
Ji et al. "Multi-Modal Anomaly Detection for Unstructured and Uncertain Environments." Conference on Robot Learning, 2020.
Markdown
[Ji et al. "Multi-Modal Anomaly Detection for Unstructured and Uncertain Environments." Conference on Robot Learning, 2020.](https://mlanthology.org/corl/2020/ji2020corl-multimodal/)
BibTeX
@inproceedings{ji2020corl-multimodal,
title = {{Multi-Modal Anomaly Detection for Unstructured and Uncertain Environments}},
author = {Ji, Tianchen and Vuppala, Sri Theja and Chowdhary, Girish and Driggs-Campbell, Katherine},
booktitle = {Conference on Robot Learning},
year = {2020},
pages = {1443--1455},
volume = {155},
url = {https://mlanthology.org/corl/2020/ji2020corl-multimodal/}
}