Experience with the Evaluation of Natural Language Question Answerers
Abstract
Research in natural language processing could be facilitated by thorough and critical evaluations of natural language systems. This paper defines and discusses two measurements: conceptual and linguistic completeness. Testing of two natural language question answerers demonstrated that the conceptual coverage of such systems should be extended to better satisfy the needs and expectations of users. Three heuristics are presented that describe how the conceptual coverage of question answerers should be extended. (Author)
Cite
Text
Tennant. "Experience with the Evaluation of Natural Language Question Answerers." International Joint Conference on Artificial Intelligence, 1979.
Markdown
[Tennant. "Experience with the Evaluation of Natural Language Question Answerers." International Joint Conference on Artificial Intelligence, 1979.](https://mlanthology.org/ijcai/1979/tennant1979ijcai-experience/)
BibTeX
@inproceedings{tennant1979ijcai-experience,
title = {{Experience with the Evaluation of Natural Language Question Answerers}},
author = {Tennant, Harry R.},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {1979},
pages = {874-876},
url = {https://mlanthology.org/ijcai/1979/tennant1979ijcai-experience/}
}