Active Agent Oriented Multimodal Interface System

Abstract

This paper presents a prototype of an interface system with an active human-like agent. In ordinary human communication, non-verbal expressions play important roles: they convey emotional information and also control the timing of interaction. This project attempts to introduce multimodality into human-computer interaction. Our human-like agent, with its realistic facial expressions, identifies the user by sight and interacts actively and individually with each user in spoken language. That is, the agent sees the human, visually recognizes who the person is, maintains eye contact through its facial display, and initiates spoken-language interaction by talking to the human first.

Cite

Text

Hasegawa et al. "Active Agent Oriented Multimodal Interface System." International Joint Conference on Artificial Intelligence, 1995.

Markdown

[Hasegawa et al. "Active Agent Oriented Multimodal Interface System." International Joint Conference on Artificial Intelligence, 1995.](https://mlanthology.org/ijcai/1995/hasegawa1995ijcai-active/)

BibTeX

@inproceedings{hasegawa1995ijcai-active,
  title     = {{Active Agent Oriented Multimodal Interface System}},
  author    = {Hasegawa, Osamu and Itou, Katsunobu and Kurita, Takio and Hayamizu, Satoru and Tanaka, Kazuyo and Yamamoto, Kazuhiko and Otsu, Nobuyuki},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {1995},
  pages     = {82--87},
  url       = {https://mlanthology.org/ijcai/1995/hasegawa1995ijcai-active/}
}