Latent Space Factorization in LoRA
Abstract
Low-rank adaptation (LoRA) is a widely used method for parameter-efficient finetuning. However, existing LoRA variants lack mechanisms to explicitly disambiguate task-relevant information within the learned low-rank subspace, potentially limiting downstream performance. We propose Factorized Variational Autoencoder LoRA (FVAE-LoRA), which leverages a VAE to learn two distinct latent spaces. Our novel Evidence Lower Bound formulation explicitly promotes factorization between the latent spaces, dedicating one latent space to task-salient features and the other to residual information. Extensive experiments on text, audio, and image tasks demonstrate that FVAE-LoRA consistently outperforms standard LoRA. Moreover, spurious correlation evaluations confirm that FVAE-LoRA better isolates task-relevant signals, leading to improved robustness under distribution shifts. Our code is publicly available at: https://github.com/idiap/FVAE-LoRA
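The abstract sketches the core idea: a LoRA-style adapter whose low-rank update is routed through a VAE that splits its latent code into a task-salient part and a residual part. Below is a minimal, hypothetical PyTorch sketch of such a layer, assuming a standard reparameterized Gaussian VAE and a zero-initialized up-projection. The class name `FVAELoRALayer`, the two latent heads, and the auxiliary loss terms are illustrative assumptions inferred from the abstract, not the authors' released implementation (see the linked repository for that).

```python
import torch
import torch.nn as nn

class FVAELoRALayer(nn.Module):
    """Illustrative sketch: a LoRA-style adapter whose low-rank code is passed
    through a VAE with two latent spaces -- z_t for task-salient features and
    z_r for residual information. Hypothetical structure, not the paper's code."""

    def __init__(self, d_in: int, d_out: int, rank: int = 8, latent_dim: int = 8):
        super().__init__()
        self.A = nn.Linear(d_in, rank, bias=False)        # LoRA down-projection
        # Encoder heads producing mean / log-variance for each latent space.
        self.mu_t, self.logvar_t = nn.Linear(rank, latent_dim), nn.Linear(rank, latent_dim)
        self.mu_r, self.logvar_r = nn.Linear(rank, latent_dim), nn.Linear(rank, latent_dim)
        self.decode = nn.Linear(2 * latent_dim, rank)     # reconstruct the low-rank code
        self.B = nn.Linear(rank, d_out, bias=False)       # LoRA up-projection
        nn.init.zeros_(self.B.weight)                     # standard LoRA init: zero update at start

    @staticmethod
    def reparameterize(mu: torch.Tensor, logvar: torch.Tensor) -> torch.Tensor:
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def forward(self, x: torch.Tensor):
        h = self.A(x)
        mu_t, lv_t = self.mu_t(h), self.logvar_t(h)
        mu_r, lv_r = self.mu_r(h), self.logvar_r(h)
        z_t = self.reparameterize(mu_t, lv_t)             # task-salient latent
        z_r = self.reparameterize(mu_r, lv_r)             # residual latent
        h_rec = self.decode(torch.cat([z_t, z_r], dim=-1))
        # KL of each latent against a standard normal prior; an ELBO variant
        # promoting factorization would add a penalty coupling z_t and z_r here.
        kl = lambda mu, lv: -0.5 * torch.sum(1.0 + lv - mu.pow(2) - lv.exp(), dim=-1)
        aux = {
            "recon_err": (h_rec - h).pow(2).mean(),
            "kl": (kl(mu_t, lv_t) + kl(mu_r, lv_r)).mean(),
        }
        return self.B(h_rec), aux
```

In this sketch the adapter output `self.B(h_rec)` would be added to the frozen layer's output as in standard LoRA, while the `aux` terms would join the task loss to form the ELBO-style training objective; how the paper's formulation weighs and couples these terms is specified in the paper itself, not here.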
Cite
Text
Kumar et al. "Latent Space Factorization in LoRA." Advances in Neural Information Processing Systems, 2025.

Markdown
[Kumar et al. "Latent Space Factorization in LoRA." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/kumar2025neurips-latent/)

BibTeX
@inproceedings{kumar2025neurips-latent,
  title     = {{Latent Space Factorization in LoRA}},
  author    = {Kumar, Shashi and Kaloga, Yacouba and Mtr., John and Motlicek, Petr and Kodrasi, Ina},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/kumar2025neurips-latent/}
}