Smooth InfoMax - Towards Easier Post-Hoc Interpretability
Abstract
We introduce Smooth InfoMax (SIM), a self-supervised representation learning method that incorporates interpretability constraints into the latent representations at different depths of the network. Based on β-VAEs, SIM’s architecture consists of probabilistic modules optimized locally with the InfoNCE loss to produce Gaussian-distributed representations regularized toward the standard normal distribution. This creates smooth, well-defined, and better-disentangled latent spaces, enabling easier post-hoc analysis. Evaluated on speech data, SIM preserves the large-scale training benefits of Greedy InfoMax while improving the effectiveness of post-hoc interpretability methods across layers. Our code is available via GitHub.
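The core training objective described above combines a local InfoNCE term per module with a KL regularizer pulling each Gaussian latent toward the standard normal, as in β-VAEs. A minimal NumPy sketch of that combination is below; it is an illustration of the loss structure, not the authors' implementation, and the function names, toy encoder outputs, and the β weight are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    # Sample z ~ N(mu, sigma^2) via the reparameterization trick.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    # KL(N(mu, sigma^2) || N(0, 1)), summed over latent dims, averaged over the batch.
    return np.mean(np.sum(0.5 * (mu**2 + np.exp(log_var) - log_var - 1.0), axis=-1))

def info_nce(anchors, positives):
    # InfoNCE: each anchor must identify its matching positive among all
    # positives in the batch (the correct pair sits on the diagonal).
    logits = anchors @ positives.T                       # (B, B) similarity scores
    logits -= logits.max(axis=1, keepdims=True)          # numerical stability
    log_softmax = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_softmax))

# Toy per-module encoder outputs (mu, log_var) for anchor/positive pairs;
# in SIM these would come from one locally trained probabilistic module.
B, D = 8, 4
mu_a, lv_a = rng.standard_normal((B, D)), rng.standard_normal((B, D))
mu_p, lv_p = rng.standard_normal((B, D)), rng.standard_normal((B, D))

z_a, z_p = reparameterize(mu_a, lv_a), reparameterize(mu_p, lv_p)
beta = 0.1  # hypothetical weight on the KL regularizer (assumption for this sketch)
loss = info_nce(z_a, z_p) + beta * (
    kl_to_standard_normal(mu_a, lv_a) + kl_to_standard_normal(mu_p, lv_p)
)
print(float(loss))
```

Because the InfoNCE term is a negative log-softmax of the correct pair and the KL term is nonnegative, the combined loss is always nonnegative; the β weight trades contrastive discriminability against how strongly each latent space is pulled toward N(0, 1).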
Cite
Text

Denoodt et al. "Smooth InfoMax - Towards Easier Post-Hoc Interpretability." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2025. doi:10.1007/978-3-032-06066-2_30

Markdown

[Denoodt et al. "Smooth InfoMax - Towards Easier Post-Hoc Interpretability." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2025.](https://mlanthology.org/ecmlpkdd/2025/denoodt2025ecmlpkdd-smooth/) doi:10.1007/978-3-032-06066-2_30

BibTeX
@inproceedings{denoodt2025ecmlpkdd-smooth,
title = {{Smooth InfoMax - Towards Easier Post-Hoc Interpretability}},
author = {Denoodt, Fabian and de Boer, Bart and Oramas, José},
booktitle = {European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases},
year = {2025},
pages = {512--527},
doi = {10.1007/978-3-032-06066-2_30},
url = {https://mlanthology.org/ecmlpkdd/2025/denoodt2025ecmlpkdd-smooth/}
}