A Brief History of Learning Symbolic Higher-Level Representations from Data (And a Curious Look Forward)
Abstract
Learning higher-level representations from data has been on the agenda of AI research for several decades. In this paper, I survey various approaches to learning symbolic higher-level representations: feature construction and constructive induction, predicate invention, propositionalization, pattern mining, and mining time series patterns. Finally, I give an outlook on how approaches to learning higher-level representations, symbolic and neural, can benefit from each other to solve current issues in machine learning.
Cite
Text
Kramer. "A Brief History of Learning Symbolic Higher-Level Representations from Data (And a Curious Look Forward)." International Joint Conference on Artificial Intelligence, 2020. doi:10.24963/IJCAI.2020/678Markdown
[Kramer. "A Brief History of Learning Symbolic Higher-Level Representations from Data (And a Curious Look Forward)." International Joint Conference on Artificial Intelligence, 2020.](https://mlanthology.org/ijcai/2020/kramer2020ijcai-brief/) doi:10.24963/IJCAI.2020/678BibTeX
@inproceedings{kramer2020ijcai-brief,
title = {{A Brief History of Learning Symbolic Higher-Level Representations from Data (And a Curious Look Forward)}},
author = {Kramer, Stefan},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2020},
pages = {4868--4876},
doi = {10.24963/IJCAI.2020/678},
url = {https://mlanthology.org/ijcai/2020/kramer2020ijcai-brief/}
}