Learning Tree Conditional Random Fields
Abstract
We examine maximum spanning tree-based methods for learning the structure of tree Conditional Random Fields (CRFs) P(Y|X). We use edge weights that take advantage of local inputs X and thus scale to large problems. For a general class of edge weights, we give a negative learnability result. However, we demonstrate that two members of the class--local Conditional Mutual Information and Decomposable Conditional Influence--have reasonable theoretical bases and perform very well in practice. On synthetic data and a large-scale fMRI application, our methods outperform existing techniques.
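The high-level recipe the abstract describes--score each candidate edge (Y_i, Y_j) with a weight such as empirical conditional mutual information I(Y_i; Y_j | X), then keep the maximum-weight spanning tree--can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes binary variables, a single discrete input X, plug-in probability estimates from counts, and a plain Kruskal-style tree construction.

```python
from collections import Counter
from itertools import combinations
from math import log

def cond_mutual_info(ys_i, ys_j, xs):
    """Plug-in estimate of I(Y_i; Y_j | X) from paired samples (in nats)."""
    n = len(xs)
    c_x = Counter(xs)
    c_xij = Counter(zip(xs, ys_i, ys_j))
    c_xi = Counter(zip(xs, ys_i))
    c_xj = Counter(zip(xs, ys_j))
    cmi = 0.0
    for (x, yi, yj), c in c_xij.items():
        # p(x,yi,yj) * log[ p(x,yi,yj) p(x) / (p(x,yi) p(x,yj)) ]
        cmi += (c / n) * log(c * c_x[x] / (c_xi[(x, yi)] * c_xj[(x, yj)]))
    return cmi

def max_spanning_tree(n_vars, weights):
    """Kruskal's algorithm over descending edge weights, with union-find."""
    parent = list(range(n_vars))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    tree = []
    for (i, j), _w in sorted(weights.items(), key=lambda kv: -kv[1]):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            tree.append((i, j))
    return tree

# Toy data (hypothetical): one binary input x, three binary outputs y0..y2.
# Given x, y0 and y1 are perfectly dependent; y2 is a copy of x (so it is
# conditionally independent of the others given x).
data = [
    (0, 0, 0, 0), (0, 1, 1, 0), (0, 0, 0, 0), (0, 1, 1, 0),
    (1, 0, 1, 1), (1, 1, 0, 1), (1, 0, 1, 1), (1, 1, 0, 1),
]
xs = [r[0] for r in data]
ys = [[r[1 + k] for r in data] for k in range(3)]

weights = {(i, j): cond_mutual_info(ys[i], ys[j], xs)
           for i, j in combinations(range(3), 2)}
tree = max_spanning_tree(3, weights)
print(tree)  # the (0, 1) edge is always selected first: I(Y0; Y1 | X) = log 2
```

In this toy example the edge (0, 1) gets weight log 2 while the edges touching y2 get weight 0, so the learned tree necessarily contains (0, 1) plus one zero-weight edge to connect y2. The paper's "local" variant further restricts which inputs X each edge weight conditions on, which is what makes the approach scale.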
Cite
Bradley and Guestrin. "Learning Tree Conditional Random Fields." International Conference on Machine Learning, 2010.

BibTeX
@inproceedings{bradley2010icml-learning,
title = {{Learning Tree Conditional Random Fields}},
author = {Bradley, Joseph K. and Guestrin, Carlos},
booktitle = {International Conference on Machine Learning},
year = {2010},
pages = {127-134},
url = {https://mlanthology.org/icml/2010/bradley2010icml-learning/}
}