Parameter Learning with Truncated Message-Passing
Abstract
Training of conditional random fields often takes the form of a double-loop procedure with message-passing inference in the inner loop. This can be very expensive, as the need to solve the inner loop to high accuracy can require many message-passing iterations. This paper seeks to reduce the expense of such training, by redefining the training objective in terms of the approximate marginals obtained after message-passing is "truncated" to a fixed number of iterations. An algorithm is derived to efficiently compute the exact gradient of this objective. On a common pixel labeling benchmark, this procedure improves training speeds by an order of magnitude, and slightly improves inference accuracy if a very small number of message-passing iterations are used at test time.
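The abstract's core idea, defining the training objective on the approximate marginals produced by a fixed number K of message-passing iterations, can be sketched on a toy chain MRF. This is an illustrative reconstruction, not the paper's method: the paper derives an efficient exact gradient of the truncated objective via a backward pass, while the sketch below merely checks that the objective is differentiable using finite differences. All function names, the surrogate loss, and the chain/label sizes are invented for the example.

```python
import numpy as np

def truncated_marginals(unary, pairwise, K):
    """Approximate marginals of a chain MRF after exactly K sum-product
    iterations (truncated BP; on a chain, large enough K is exact).

    unary:    (n, L) log unary potentials
    pairwise: (L, L) log pairwise potential shared by all edges
    """
    n, L = unary.shape
    phi, psi = np.exp(unary), np.exp(pairwise)
    fwd = np.ones((n - 1, L))   # fwd[i]: message node i -> i+1
    bwd = np.ones((n - 1, L))   # bwd[i]: message node i+1 -> i
    for _ in range(K):
        new_fwd, new_bwd = np.empty_like(fwd), np.empty_like(bwd)
        for i in range(n - 1):
            b = phi[i] * (fwd[i - 1] if i > 0 else 1.0)
            new_fwd[i] = b @ psi
            new_fwd[i] /= new_fwd[i].sum()
            b = phi[i + 1] * (bwd[i + 1] if i + 1 < n - 1 else 1.0)
            new_bwd[i] = psi @ b
            new_bwd[i] /= new_bwd[i].sum()
        fwd, bwd = new_fwd, new_bwd   # parallel ("flooding") update
    marg = phi.copy()
    marg[1:] *= fwd    # fold in messages from the left
    marg[:-1] *= bwd   # fold in messages from the right
    return marg / marg.sum(axis=1, keepdims=True)

def truncated_loss(unary, pairwise, labels, K):
    """Surrogate likelihood: negative log approximate marginal of the true
    label at each node, with marginals defined by K truncated iterations."""
    m = truncated_marginals(unary, pairwise, K)
    return -np.log(m[np.arange(len(labels)), labels]).sum()

def fd_gradient(unary, pairwise, labels, K, eps=1e-5):
    """Finite-difference gradient of the truncated objective w.r.t. the
    unary parameters (the paper computes this exactly and efficiently)."""
    g = np.zeros_like(unary)
    for idx in np.ndindex(*unary.shape):
        up, dn = unary.copy(), unary.copy()
        up[idx] += eps
        dn[idx] -= eps
        g[idx] = (truncated_loss(up, pairwise, labels, K)
                  - truncated_loss(dn, pairwise, labels, K)) / (2 * eps)
    return g
```

A gradient step on this objective fits the parameters to the marginals that the same K-iteration inference would actually produce at test time, which matches the abstract's motivation for truncating rather than running message passing to convergence.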
Cite
Text
Domke. "Parameter Learning with Truncated Message-Passing." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2011. doi:10.1109/CVPR.2011.5995320
Markdown
[Domke. "Parameter Learning with Truncated Message-Passing." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2011.](https://mlanthology.org/cvpr/2011/domke2011cvpr-parameter/) doi:10.1109/CVPR.2011.5995320
BibTeX
@inproceedings{domke2011cvpr-parameter,
title = {{Parameter Learning with Truncated Message-Passing}},
author = {Domke, Justin},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year = {2011},
pages = {2937-2943},
doi = {10.1109/CVPR.2011.5995320},
url = {https://mlanthology.org/cvpr/2011/domke2011cvpr-parameter/}
}