Conditional Generative Learning from Invariant Representations in Multi-Source: Robustness and Efficiency
Abstract
Multi-source generative models have gained significant attention for their ability to capture complex data distributions across diverse domains. However, existing approaches often suffer from limitations such as negative transfer and an over-reliance on large pre-trained models. To address these challenges, we propose a novel method that effectively handles scenarios with outlier source domains while making weaker assumptions about the data, thus ensuring broader applicability. Our approach enhances robustness and efficiency, supported by rigorous theoretical analysis, including non-asymptotic error bounds and asymptotic guarantees. We validate our method through numerical simulations and real-world data experiments, showcasing its practical effectiveness and adaptability.
Cite
Text
Zhu et al. "Conditional Generative Learning from Invariant Representations in Multi-Source: Robustness and Efficiency." Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, 2025.
Markdown
[Zhu et al. "Conditional Generative Learning from Invariant Representations in Multi-Source: Robustness and Efficiency." Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, 2025.](https://mlanthology.org/aistats/2025/zhu2025aistats-conditional/)
BibTeX
@inproceedings{zhu2025aistats-conditional,
title = {{Conditional Generative Learning from Invariant Representations in Multi-Source: Robustness and Efficiency}},
author = {Zhu, Guojun and Zhang, Sanguo and Ren, Mingyang},
booktitle = {Proceedings of The 28th International Conference on Artificial Intelligence and Statistics},
year = {2025},
pages = {217--225},
volume = {258},
url = {https://mlanthology.org/aistats/2025/zhu2025aistats-conditional/}
}