Learning Object Representations Through Amortized Inference over Probabilistic Programs
Abstract
Recent developments in modern probabilistic programming languages have made it possible to use pattern-recognition engines implemented by neural networks to guide inference over explanatory factors expressed as symbols in probabilistic programs. We argue that learning to invert fixed generative programs, rather than learned ones, places stronger constraints on the representations learned by feature-extraction networks, which reduces the space of latent hypotheses and improves training efficiency. To demonstrate this empirically, we investigate a neurosymbolic object-centric representation learning approach in which a slot-based neural module, optimized via inference compilation, is used to invert a fixed generative program for scene generation. By amortizing the search over posterior hypotheses, we show that approximate inference using data-driven sequential Monte Carlo methods achieves results competitive with state-of-the-art fully neural baselines while requiring several times fewer training steps.
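The following is a minimal sketch of the general idea described above: a fixed generative program produces scenes, and a data-driven proposal (standing in for an amortized, slot-based neural guide) suggests latent hypotheses that are scored under the program's likelihood. Everything here is an illustrative assumption, not the paper's model: the toy 1-D renderer, the `data_driven_proposal` heuristic, and all parameter values are hypothetical, and the weighting omits the prior-to-proposal density ratio of a full importance sampler for brevity.

```python
# Toy sketch of inverting a fixed generative program with a data-driven proposal.
import numpy as np

rng = np.random.default_rng(0)
GRID = 32  # resolution of the 1-D "image"

def generative_program(rng):
    """Fixed prior: sample an object count and object positions."""
    n_objects = rng.integers(1, 4)                      # latent: how many objects
    positions = rng.uniform(0, GRID, size=n_objects)    # latent: where they are
    return n_objects, positions

def render(positions):
    """Deterministic renderer: each object adds a Gaussian bump to the scene."""
    xs = np.arange(GRID)
    return sum(np.exp(-0.5 * ((xs - p) / 1.5) ** 2) for p in positions)

def log_likelihood(obs, positions, noise=0.1):
    """Gaussian pixel-wise likelihood of the observation given the latents."""
    return -0.5 * np.sum(((obs - render(positions)) / noise) ** 2)

def data_driven_proposal(obs, rng):
    """Stand-in for an amortized neural proposal: propose positions near
    local maxima of the observation instead of sampling blindly from the prior."""
    peaks = np.argsort(obs)[-3:]
    n = rng.integers(1, 4)
    return n, peaks[:n] + rng.normal(0, 1.0, size=n)

def importance_sample(obs, n_particles=100):
    """Score data-driven proposals under the fixed program's likelihood.
    (Simplified: weights use only the likelihood term.)"""
    particles, log_w = [], []
    for _ in range(n_particles):
        n, pos = data_driven_proposal(obs, rng)
        particles.append((n, pos))
        log_w.append(log_likelihood(obs, pos))
    log_w = np.array(log_w)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    return particles[int(np.argmax(w))], w

# Simulate an observation from the program, then invert it.
_, true_pos = generative_program(rng)
obs = render(true_pos) + rng.normal(0, 0.1, GRID)
best, weights = importance_sample(obs)
print("true positions:    ", np.round(np.sort(true_pos), 1))
print("inferred positions:", np.round(np.sort(best[1]), 1))
```

In the paper's setting, the hand-written `data_driven_proposal` would be replaced by a slot-based neural network trained via inference compilation, and the single importance-sampling pass by a sequential Monte Carlo sweep over objects.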
Cite
Text
Silva et al. "Learning Object Representations Through Amortized Inference over Probabilistic Programs." Transactions on Machine Learning Research, 2026.
Markdown
[Silva et al. "Learning Object Representations Through Amortized Inference over Probabilistic Programs." Transactions on Machine Learning Research, 2026.](https://mlanthology.org/tmlr/2026/silva2026tmlr-learning/)
BibTeX
@article{silva2026tmlr-learning,
  title   = {{Learning Object Representations Through Amortized Inference over Probabilistic Programs}},
  author  = {Silva, Francisco and Oliveira, H{\'e}lder P. and Pereira, Tania},
  journal = {Transactions on Machine Learning Research},
  year    = {2026},
  url     = {https://mlanthology.org/tmlr/2026/silva2026tmlr-learning/}
}