Evaluating Machine Learning for Information Extraction
Abstract
Comparative evaluation of Machine Learning (ML) systems used for Information Extraction (IE) has suffered from various inconsistencies in experimental procedures. This paper reports on the results of the Pascal Challenge on Evaluating Machine Learning for Information Extraction, which provides a standardised corpus, set of tasks, and evaluation methodology. The challenge is described, the systems submitted by the ten participants are briefly introduced, and their performance is analysed.
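The standardised evaluation methodology in IE challenges of this kind typically scores systems by precision, recall, and F1 over extracted slot fills. A minimal sketch of that scoring scheme (the function name and the example data below are illustrative, not taken from the paper):

```python
def score(predicted, gold):
    """Return (precision, recall, f1) for sets of (doc_id, slot, value) fills.

    Illustrative only: exact-match scoring, the usual baseline for
    slot-filling IE evaluation; the actual challenge methodology is
    defined in the paper itself.
    """
    predicted, gold = set(predicted), set(gold)
    tp = len(predicted & gold)  # extractions matching a gold annotation exactly
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical example: two of three predictions match the gold fills,
# and one of three gold fills is missed or wrong.
pred = [("d1", "speaker", "J. Smith"), ("d1", "time", "3pm"),
        ("d2", "speaker", "A. Lee")]
gold = [("d1", "speaker", "J. Smith"), ("d1", "time", "3pm"),
        ("d2", "speaker", "B. Kim")]
p, r, f = score(pred, gold)  # all three equal 2/3 here
```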
Cite
Text
Ireson et al. "Evaluating Machine Learning for Information Extraction." International Conference on Machine Learning, 2005. doi:10.1145/1102351.1102395
Markdown
[Ireson et al. "Evaluating Machine Learning for Information Extraction." International Conference on Machine Learning, 2005.](https://mlanthology.org/icml/2005/ireson2005icml-evaluating/) doi:10.1145/1102351.1102395
BibTeX
@inproceedings{ireson2005icml-evaluating,
title = {{Evaluating Machine Learning for Information Extraction}},
author = {Ireson, Neil and Ciravegna, Fabio and Califf, Mary Elaine and Freitag, Dayne and Kushmerick, Nicholas and Lavelli, Alberto},
booktitle = {International Conference on Machine Learning},
year = {2005},
pages = {345--352},
doi = {10.1145/1102351.1102395},
url = {https://mlanthology.org/icml/2005/ireson2005icml-evaluating/}
}