Student-Sensitive Multimodal Explanation Generation for 3D Learning Environments
Abstract
Intelligent multimedia systems hold great promise for knowledge-based learning environments. Because of recent advances in our understanding of how to dynamically generate multimodal explanations and the rapid growth in the performance of 3D graphics technologies, it is becoming feasible to create multimodal explanation generators that operate in realtime. Perhaps most compelling about these developments is the prospect of enabling generators to create explanations that are customized to the ongoing "dialogue" in which they occur. To address these issues, we have developed a student-sensitive multimodal explanation generation framework that exploits a discourse history to automatically create explanations whose content, cinematography, and accompanying natural language utterances are customized to the dialogue context. By these means, they create integrative explanations that actively promote knowledge integration. This framework has been implemented in CINESPEAK, a student-sensitive multimodal explanation generator.
Cite
Text
Daniel et al. "Student-Sensitive Multimodal Explanation Generation for 3D Learning Environments." AAAI Conference on Artificial Intelligence, 1999.
Markdown
[Daniel et al. "Student-Sensitive Multimodal Explanation Generation for 3D Learning Environments." AAAI Conference on Artificial Intelligence, 1999.](https://mlanthology.org/aaai/1999/daniel1999aaai-student/)
BibTeX
@inproceedings{daniel1999aaai-student,
title = {{Student-Sensitive Multimodal Explanation Generation for 3D Learning Environments}},
author = {Daniel, Brent H. and Bares, William H. and Callaway, Charles B. and Lester, James C.},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {1999},
pages = {114--120},
url = {https://mlanthology.org/aaai/1999/daniel1999aaai-student/}
}