Speech-Based Medical Decision Support in VR Using a Deep Neural Network (Demonstration)
Abstract
We present a speech dialogue system that facilitates medical decision support for doctors in a virtual reality (VR) application. The therapy prediction is based on a recurrent neural network model that incorporates the examination history of patients. A central supervised patient database provides input to our predictive model and allows us, first, to add new examination reports on-the-fly via a pen-based mobile application and, second, to obtain therapy prediction results in real time. This demo includes a visualisation of patient records, radiology image data, and the therapy prediction results in VR.
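The core idea of the abstract, a recurrent network that reads a patient's examination history and produces a therapy prediction, can be sketched as follows. This is a minimal illustrative example, not the authors' implementation: the feature sizes, the vanilla (Elman) RNN cell, and the randomly initialised weights standing in for a trained model are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_features = 8     # size of one encoded examination report (assumed)
n_hidden = 16      # recurrent state size (assumed)
n_therapies = 4    # number of candidate therapies (assumed)

# Randomly initialised parameters stand in for a trained model.
W_xh = rng.normal(scale=0.1, size=(n_features, n_hidden))
W_hh = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
W_hy = rng.normal(scale=0.1, size=(n_hidden, n_therapies))

def predict_therapy(history):
    """Run a vanilla RNN over the examination history (one feature
    vector per report) and return a distribution over therapies."""
    h = np.zeros(n_hidden)
    for x in history:                      # one step per examination report
        h = np.tanh(x @ W_xh + h @ W_hh)   # update recurrent state
    logits = h @ W_hy
    exp = np.exp(logits - logits.max())    # softmax over therapy classes
    return exp / exp.sum()

# Example: a patient whose history contains three examination reports.
history = rng.normal(size=(3, n_features))
probs = predict_therapy(history)
print(probs.shape, round(float(probs.sum()), 6))
```

Adding a new examination report, as in the demo's pen-based workflow, simply appends one more feature vector to `history` before the model is re-run.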
Cite
Text
Prange et al. "Speech-Based Medical Decision Support in VR Using a Deep Neural Network (Demonstration)." International Joint Conference on Artificial Intelligence, 2017. doi:10.24963/IJCAI.2017/777
Markdown
[Prange et al. "Speech-Based Medical Decision Support in VR Using a Deep Neural Network (Demonstration)." International Joint Conference on Artificial Intelligence, 2017.](https://mlanthology.org/ijcai/2017/prange2017ijcai-speech/) doi:10.24963/IJCAI.2017/777
BibTeX
@inproceedings{prange2017ijcai-speech,
title = {{Speech-Based Medical Decision Support in VR Using a Deep Neural Network (Demonstration)}},
author = {Prange, Alexander and Barz, Michael and Sonntag, Daniel},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2017},
pages = {5241--5242},
doi = {10.24963/IJCAI.2017/777},
url = {https://mlanthology.org/ijcai/2017/prange2017ijcai-speech/}
}