Hyper-Transforming Latent Diffusion Models
Abstract
We introduce a novel generative framework for functions by integrating Implicit Neural Representations (INRs) and Transformer-based hypernetworks into latent variable models. Unlike prior approaches that rely on MLP-based hypernetworks with scalability limitations, our method employs a Transformer-based decoder to generate INR parameters from latent variables, addressing both representation capacity and computational efficiency. Our framework extends latent diffusion models (LDMs) to INR generation by replacing standard decoders with a Transformer-based hypernetwork, which can be trained either from scratch or via hyper-transforming—a strategy that fine-tunes only the decoder while freezing the pre-trained latent space. This enables efficient adaptation of existing generative models to INR-based representations without requiring full retraining. We validate our approach across multiple modalities, demonstrating improved scalability, expressiveness, and generalization over existing INR-based generative models. Our findings establish a unified and flexible framework for learning structured function representations.
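To make the decoder described above concrete, below is a minimal sketch (not the authors' released code) of a Transformer-based hypernetwork that maps latent tokens to the parameters of a small coordinate-MLP INR, together with a hyper-transforming-style update in which only the decoder is optimized while the latents are treated as coming from a frozen pre-trained model. All module names, layer sizes, the SIREN-style activation, and the coordinate-to-RGB INR layout are illustrative assumptions, not the paper's exact architecture.

import torch
import torch.nn as nn


class HyperTransformerDecoder(nn.Module):
    """Maps a set of latent tokens to per-layer INR weights via a Transformer (sketch)."""

    def __init__(self, latent_dim=64, d_model=256, n_layers=4, n_heads=8,
                 inr_sizes=((2, 128), (128, 128), (128, 3))):  # assumed coords -> RGB INR
        super().__init__()
        self.in_proj = nn.Linear(latent_dim, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(enc_layer, n_layers)
        # One learned query token per INR layer; each query is decoded to that layer's weights.
        self.inr_sizes = inr_sizes
        self.queries = nn.Parameter(torch.randn(len(inr_sizes), d_model) * 0.02)
        self.heads = nn.ModuleList(
            [nn.Linear(d_model, fan_in * fan_out + fan_out) for fan_in, fan_out in inr_sizes]
        )

    def forward(self, z):                      # z: (B, num_tokens, latent_dim)
        tokens = self.in_proj(z)               # project latent tokens to model width
        q = self.queries.expand(z.size(0), -1, -1)
        h = self.transformer(torch.cat([q, tokens], dim=1))[:, : len(self.inr_sizes)]
        return [head(h[:, i]) for i, head in enumerate(self.heads)]  # per-layer flat params

    def render(self, params, coords):          # coords: (B, N, 2) -> outputs (B, N, 3)
        x = coords
        for i, ((fan_in, fan_out), p) in enumerate(zip(self.inr_sizes, params)):
            w = p[:, : fan_in * fan_out].view(-1, fan_in, fan_out)
            b = p[:, fan_in * fan_out:].unsqueeze(1)
            x = x @ w + b                      # apply the generated layer to the coordinates
            if i < len(self.inr_sizes) - 1:
                x = torch.sin(x)               # SIREN-style activation (assumed)
        return x


# Hyper-transforming, as described in the abstract: keep the pre-trained latent space
# fixed and fine-tune only the hypernetwork decoder. The random latents below stand in
# for samples from a frozen, pre-trained LDM latent space.
decoder = HyperTransformerDecoder()
optimizer = torch.optim.Adam(decoder.parameters(), lr=1e-4)

z = torch.randn(4, 16, 64)                     # stand-in for frozen pre-trained latents
coords = torch.rand(4, 1024, 2)                # query coordinates in [0, 1]^2
target = torch.rand(4, 1024, 3)                # ground-truth pixel values at those coordinates

pred = decoder.render(decoder(z), coords)
loss = nn.functional.mse_loss(pred, target)
loss.backward()
optimizer.step()

Because the latent space is left untouched, this adaptation only back-propagates through the hypernetwork decoder, which is what makes reusing an existing pre-trained LDM without full retraining possible.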
Cite
Text
Peis et al. "Hyper-Transforming Latent Diffusion Models." Proceedings of the 42nd International Conference on Machine Learning, 2025.
Markdown
[Peis et al. "Hyper-Transforming Latent Diffusion Models." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/peis2025icml-hypertransforming/)
BibTeX
@inproceedings{peis2025icml-hypertransforming,
title = {{Hyper-Transforming Latent Diffusion Models}},
author = {Peis, Ignacio and Koyuncu, Batuhan and Valera, Isabel and Frellsen, Jes},
booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
year = {2025},
pages = {48714--48733},
volume = {267},
url = {https://mlanthology.org/icml/2025/peis2025icml-hypertransforming/}
}