Learning Greedy Policies for the Easy-First Framework
Abstract
Easy-first, a search-based structured prediction approach, has been applied to many NLP tasks including dependency parsing and coreference resolution. This approach employs a learned greedy policy (action scoring function) to make easy decisions first, which constrains the remaining decisions and makes them easier. We formulate greedy policy learning in the Easy-first approach as a novel non-convex optimization problem and solve it via an efficient Majorization-Minimization (MM) algorithm. Results on within-document coreference and cross-document joint entity and event coreference tasks demonstrate that the proposed approach achieves statistically significant performance improvements over existing training regimes for Easy-first and is less susceptible to overfitting.
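As a reading aid, here is a minimal sketch of the easy-first inference loop the abstract describes: a learned scoring function ranks all pending actions, and the highest-scoring ("easiest") one is committed first, constraining the decisions that follow. The function names, state representation, and signatures below are hypothetical illustrations, not the authors' implementation; in the paper, the scoring function itself is what the MM algorithm learns.

```python
def easy_first_inference(initial_state, get_actions, score, apply_action):
    """Greedy easy-first inference (illustrative sketch).

    get_actions(state)          -> legal actions in the current state
    score(state, action)        -> learned policy's confidence (hypothetical)
    apply_action(state, action) -> new state with the action committed
    """
    state = initial_state
    while True:
        candidates = get_actions(state)
        if not candidates:
            return state  # no decisions remain; the structure is complete
        # Commit the easiest (highest-scoring) decision first; the updated
        # state constrains and simplifies the remaining decisions.
        easiest = max(candidates, key=lambda a: score(state, a))
        state = apply_action(state, easiest)
```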
Cite
Text
Xie et al. "Learning Greedy Policies for the Easy-First Framework." AAAI Conference on Artificial Intelligence, 2015. doi:10.1609/AAAI.V29I1.9509

Markdown
[Xie et al. "Learning Greedy Policies for the Easy-First Framework." AAAI Conference on Artificial Intelligence, 2015.](https://mlanthology.org/aaai/2015/xie2015aaai-learning/) doi:10.1609/AAAI.V29I1.9509

BibTeX
@inproceedings{xie2015aaai-learning,
title = {{Learning Greedy Policies for the Easy-First Framework}},
author = {Xie, Jun and Ma, Chao and Doppa, Janardhan Rao and Mannem, Prashanth and Fern, Xiaoli Z. and Dietterich, Thomas G. and Tadepalli, Prasad},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2015},
pages = {2339-2345},
doi = {10.1609/AAAI.V29I1.9509},
url = {https://mlanthology.org/aaai/2015/xie2015aaai-learning/}
}