Unsupervised Learning of Neurosymbolic Encoders
Abstract
We present a framework for the unsupervised learning of neurosymbolic encoders, which are encoders obtained by composing neural networks with symbolic programs from a domain-specific language. Our framework naturally incorporates symbolic expert knowledge into the learning process, which leads to more interpretable and factorized latent representations compared to fully neural encoders. We integrate modern program synthesis techniques with the variational autoencoder (VAE) framework in order to learn a neurosymbolic encoder in conjunction with a standard decoder. The programmatic descriptions from our encoders can benefit many analysis workflows, such as in behavior modeling, where interpreting agent actions and movements is important. We evaluate our method on learning latent representations for real-world trajectory data from animal biology and sports analytics. We show that our approach offers significantly better separation of meaningful categories than standard VAEs and leads to practical gains on downstream analysis tasks, such as behavior classification.
Cite
Text
Zhan et al. "Unsupervised Learning of Neurosymbolic Encoders." Transactions on Machine Learning Research, 2022.
Markdown
[Zhan et al. "Unsupervised Learning of Neurosymbolic Encoders." Transactions on Machine Learning Research, 2022.](https://mlanthology.org/tmlr/2022/zhan2022tmlr-unsupervised/)
BibTeX
@article{zhan2022tmlr-unsupervised,
title = {{Unsupervised Learning of Neurosymbolic Encoders}},
author = {Zhan, Eric and Sun, Jennifer J. and Kennedy, Ann and Yue, Yisong and Chaudhuri, Swarat},
journal = {Transactions on Machine Learning Research},
year = {2022},
url = {https://mlanthology.org/tmlr/2022/zhan2022tmlr-unsupervised/}
}