Sum-Product Autoencoding: Encoding and Decoding Representations Using Sum-Product Networks

Abstract

Sum-Product Networks (SPNs) are a deep probabilistic architecture that, up to now, has been successfully employed for tractable inference. Here, we extend their scope towards unsupervised representation learning: we encode samples into continuous and categorical embeddings and show that they can also be decoded back into the original input space by leveraging MPE inference. We characterize when this Sum-Product Autoencoding (SPAE) leads to equivalent reconstructions and extend it to deal with missing embedding information. Our experimental results on several multi-label classification problems demonstrate that SPAE is competitive with state-of-the-art autoencoder architectures, even though the SPNs were never trained to reconstruct their inputs.
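To make the encode/decode idea concrete, here is a minimal sketch on a hand-specified toy SPN over two binary variables. It is not the paper's full SPAE procedure: all names, the network structure, and its parameters are illustrative assumptions. The root is a sum node over two product components of Bernoulli leaves; encoding takes the posterior over the sum node's children as a categorical embedding, and decoding follows the highest-responsibility child and returns each leaf's mode, in the spirit of MPE inference.

```python
import numpy as np

# Toy SPN over two binary variables (X0, X1), hypothetical parameters:
#   root = 0.6 * P_a + 0.4 * P_b
#   P_a  = Bern(X0; 0.9) * Bern(X1; 0.8)   # "mostly on" component
#   P_b  = Bern(X0; 0.1) * Bern(X1; 0.2)   # "mostly off" component
WEIGHTS = np.array([0.6, 0.4])
LEAF_P1 = np.array([[0.9, 0.8],            # P(X_i = 1) under product P_a
                    [0.1, 0.2]])           # P(X_i = 1) under product P_b

def component_likelihoods(x):
    """Likelihood of sample x under each product child of the root."""
    probs = np.where(np.asarray(x) == 1, LEAF_P1, 1.0 - LEAF_P1)
    return probs.prod(axis=1)

def encode(x):
    """Categorical embedding: posterior over the root sum node's children."""
    joint = WEIGHTS * component_likelihoods(x)
    return joint / joint.sum()

def decode(embedding):
    """MPE-style decoding: follow the max-responsibility child of the sum
    node, then take the mode of each Bernoulli leaf below it."""
    k = int(np.argmax(embedding))
    return (LEAF_P1[k] >= 0.5).astype(int)

sample = [1, 1]
embedding = encode(sample)          # e.g. heavily weighted towards P_a
reconstruction = decode(embedding)  # recovers [1, 1] on this toy model
```

On this toy model both all-on and all-off inputs round-trip exactly; in general, the paper characterizes when such encode-then-decode reconstructions are equivalent to the input.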

Cite

Text

Vergari et al. "Sum-Product Autoencoding: Encoding and Decoding Representations Using Sum-Product Networks." AAAI Conference on Artificial Intelligence, 2018. doi:10.1609/AAAI.V32I1.11734

Markdown

[Vergari et al. "Sum-Product Autoencoding: Encoding and Decoding Representations Using Sum-Product Networks." AAAI Conference on Artificial Intelligence, 2018.](https://mlanthology.org/aaai/2018/vergari2018aaai-sum/) doi:10.1609/AAAI.V32I1.11734

BibTeX

@inproceedings{vergari2018aaai-sum,
  title     = {{Sum-Product Autoencoding: Encoding and Decoding Representations Using Sum-Product Networks}},
  author    = {Vergari, Antonio and Peharz, Robert and Di Mauro, Nicola and Molina, Alejandro and Kersting, Kristian and Esposito, Floriana},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2018},
  pages     = {4163--4170},
  doi       = {10.1609/AAAI.V32I1.11734},
  url       = {https://mlanthology.org/aaai/2018/vergari2018aaai-sum/}
}