Reduction of Maximum Entropy Models to Hidden Markov Models
Abstract
We show that maximum entropy (maxent) models can be encoded as certain kinds of HMMs, allowing us to construct maxent models with hidden variables, hidden state sequences, or other characteristics. These models can be trained using the forward-backward algorithm. While the results are primarily of theoretical interest, unifying apparently unrelated concepts, we also give experimental results for a maxent model with a hidden variable on a word disambiguation task; the model outperforms standard techniques.
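As background for the abstract above, the following is a minimal sketch of a plain conditional maxent model, p(y | x) ∝ exp(Σᵢ λᵢ fᵢ(x, y)), not the paper's HMM reduction. The feature functions, weights, and label names are illustrative assumptions, loosely styled after a word-disambiguation task.

```python
import math

def maxent_prob(x, y, labels, features, weights):
    """Probability of label y given input x under a conditional maxent model."""
    def score(label):
        # Weighted sum of feature functions: sum_i lambda_i * f_i(x, label)
        return sum(w * f(x, label) for f, w in zip(features, weights))
    # Partition function normalizes over all candidate labels.
    z = sum(math.exp(score(label)) for label in labels)
    return math.exp(score(y)) / z

# Hypothetical features for disambiguating the word "bank".
features = [
    lambda x, y: 1.0 if ("bank" in x and y == "finance") else 0.0,
    lambda x, y: 1.0 if ("river" in x and y == "geography") else 0.0,
]
weights = [1.5, 2.0]
labels = ["finance", "geography"]

# Context words observed near the ambiguous word.
p = maxent_prob({"bank", "loan"}, "finance", labels, features, weights)
```

With the context {"bank", "loan"}, only the finance feature fires, so the model prefers the "finance" sense; the paper's contribution is showing that models of this form can be embedded in HMMs and trained with forward-backward.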
Cite
Text:
Goodman. "Reduction of Maximum Entropy Models to Hidden Markov Models." Conference on Uncertainty in Artificial Intelligence, 2002.
Markdown:
[Goodman. "Reduction of Maximum Entropy Models to Hidden Markov Models." Conference on Uncertainty in Artificial Intelligence, 2002.](https://mlanthology.org/uai/2002/goodman2002uai-reduction/)
BibTeX:
@inproceedings{goodman2002uai-reduction,
title = {{Reduction of Maximum Entropy Models to Hidden Markov Models}},
author = {Goodman, Joshua},
booktitle = {Conference on Uncertainty in Artificial Intelligence},
year = {2002},
pages = {179--186},
url = {https://mlanthology.org/uai/2002/goodman2002uai-reduction/}
}