Unsupervised Hierarchical Temporal Abstraction by Simultaneously Learning Expectations and Representations

Abstract

This paper presents ENHAnCE, an algorithm that simultaneously learns a predictive model of the input stream and generates representations of the concepts being observed. Following cognitively inspired models of event segmentation, ENHAnCE uses expectation violations to identify boundaries between temporally extended patterns. It applies its expectation-driven process at multiple levels of temporal granularity to produce a hierarchy of predictive models that enable it to identify concepts at multiple levels of temporal abstraction. Evaluations show that the temporal abstraction hierarchies generated by ENHAnCE closely match hand-coded hierarchies for the test data streams. Given language data streams, ENHAnCE learns a hierarchy of predictive models that capture basic units of both spoken and written language: morphemes, lexemes, phonemes, syllables, and words.
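The core idea described in the abstract (place boundaries where a predictive model's expectations are violated, then recurse, treating the resulting chunks as symbols for the next level) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the bigram count predictor, the 1.5-bit surprisal threshold, and the add-one smoothing constant are all assumptions made for the sketch, whereas ENHAnCE learns its predictive models.

```python
from collections import defaultdict
from math import log2

def segment(stream, threshold=1.5):
    """Split a symbol stream at expectation violations.

    An online bigram count model predicts each next symbol; when the
    surprisal (-log2 of the predicted probability) of the symbol that
    actually arrives exceeds `threshold`, a segment boundary is placed.
    This is a toy stand-in for a learned predictive model; ENHAnCE's
    predictor and boundary criterion differ.
    """
    counts = defaultdict(lambda: defaultdict(int))
    segments, current = [], [stream[0]]
    for prev, nxt in zip(stream, stream[1:]):
        total = sum(counts[prev].values())
        # add-one smoothing over an assumed 26-symbol alphabet
        p = (counts[prev][nxt] + 1) / (total + 26)
        surprise = -log2(p)
        counts[prev][nxt] += 1  # update expectations online
        if surprise > threshold:  # expectation violated: close the chunk
            segments.append(tuple(current))
            current = []
        current.append(nxt)
    segments.append(tuple(current))
    return segments

def hierarchy(stream, levels=2, threshold=1.5):
    """Re-apply segmentation level by level, treating each level's
    chunks as the atomic symbols of the next level."""
    out = []
    for _ in range(levels):
        stream = segment(stream, threshold)
        out.append(stream)
    return out
```

On a stream such as "abc" repeated with varying separator symbols, early chunks are over-segmented (everything is surprising to an empty model), but once the unit's internal transitions become predictable, later chunks converge to the repeated unit with boundaries at the unpredictable separators.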

Cite

Text

Metcalf and Leake. "Unsupervised Hierarchical Temporal Abstraction by Simultaneously Learning Expectations and Representations." International Joint Conference on Artificial Intelligence, 2019. doi:10.24963/ijcai.2019/436

Markdown

[Metcalf and Leake. "Unsupervised Hierarchical Temporal Abstraction by Simultaneously Learning Expectations and Representations." International Joint Conference on Artificial Intelligence, 2019.](https://mlanthology.org/ijcai/2019/metcalf2019ijcai-unsupervised/) doi:10.24963/ijcai.2019/436

BibTeX

@inproceedings{metcalf2019ijcai-unsupervised,
  title     = {{Unsupervised Hierarchical Temporal Abstraction by Simultaneously Learning Expectations and Representations}},
  author    = {Metcalf, Katherine and Leake, David},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2019},
  pages     = {3144--3150},
  doi       = {10.24963/ijcai.2019/436},
  url       = {https://mlanthology.org/ijcai/2019/metcalf2019ijcai-unsupervised/}
}