An Infinite Hidden Markov Model with Similarity-Biased Transitions

Abstract

We describe a generalization of the Hierarchical Dirichlet Process Hidden Markov Model (HDP-HMM) that can encode the prior belief that state transitions are more likely between "nearby" states. This is accomplished by defining a similarity function on the state space and scaling transition probabilities by pairwise similarities, thereby inducing correlations among the transition distributions. We present an augmented data representation of the model as a Markov Jump Process in which: (1) some jump attempts fail, and (2) the probability of success is proportional to the similarity between the source and destination states. This augmentation restores conditional conjugacy and admits a simple Gibbs sampler. We evaluate the model and inference method on a speaker diarization task and a "harmonic parsing" task using four-part chorale data, as well as on several synthetic datasets, and find that the model compares favorably to existing alternatives.
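The two mechanisms in the abstract can be sketched concretely. Below is a minimal illustration (not the paper's implementation) under simplifying assumptions: a finite truncation to K states in place of the full HDP, a stand-in `base` matrix for the HDP-drawn transition rows, and a hypothetical exponential-decay similarity `sim`. The first function scales each transition probability by the pairwise similarity and renormalizes; the second samples a transition via the rejection ("failed jump attempt") view, in which a proposed jump to state j succeeds with probability proportional to the similarity between source and destination.

```python
import numpy as np

def biased_transition_matrix(base_pi, similarity):
    """Scale base transition rows by pairwise similarities and renormalize."""
    scaled = base_pi * similarity                       # pi_ij proportional to base_ij * phi(i, j)
    return scaled / scaled.sum(axis=1, keepdims=True)   # rows sum to 1 again

def sample_next_state_thinned(i, base_pi, similarity, rng):
    """Rejection view: attempt jumps from the base chain until one succeeds.

    An attempted jump from i to j succeeds with probability phi(i, j) / max_j phi(i, j),
    so accepted jumps follow the similarity-biased transition distribution.
    """
    phi_max = similarity[i].max()
    while True:
        j = rng.choice(len(base_pi), p=base_pi[i])      # propose from base transitions
        if rng.random() < similarity[i, j] / phi_max:   # thin by similarity
            return j

# Toy example: 3 states on a line; similarity decays with distance (an assumption).
K = 3
rng = np.random.default_rng(0)
base = rng.dirichlet(np.ones(K), size=K)                # stand-in for HDP-drawn rows
coords = np.arange(K)
sim = np.exp(-np.abs(coords[:, None] - coords[None, :]))  # phi(i, j) = exp(-|i - j|)
pi = biased_transition_matrix(base, sim)
```

Both routes target the same distribution: the empirical frequencies of `sample_next_state_thinned` converge to the corresponding row of `pi`, which is what makes the augmented representation a faithful (and conjugacy-restoring) view of the scaled model.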

Cite

Text

Dawson et al. "An Infinite Hidden Markov Model with Similarity-Biased Transitions." International Conference on Machine Learning, 2017.

Markdown

[Dawson et al. "An Infinite Hidden Markov Model with Similarity-Biased Transitions." International Conference on Machine Learning, 2017.](https://mlanthology.org/icml/2017/dawson2017icml-infinite/)

BibTeX

@inproceedings{dawson2017icml-infinite,
  title     = {{An Infinite Hidden Markov Model with Similarity-Biased Transitions}},
  author    = {Dawson, Colin Reimer and Huang, Chaofan and Morrison, Clayton T.},
  booktitle = {International Conference on Machine Learning},
  year      = {2017},
  pages     = {942--950},
  volume    = {70},
  url       = {https://mlanthology.org/icml/2017/dawson2017icml-infinite/}
}