OmniConsistency: Learning Style-Agnostic Consistency from Paired Stylization Data
Abstract
Diffusion models have significantly advanced image stylization, yet two core challenges persist: (1) maintaining stylization consistency in complex scenes, particularly for identity, composition, and fine details, and (2) preventing style degradation in image-to-image pipelines that use style LoRAs. GPT-4o's exceptional stylization consistency highlights the performance gap between open-source methods and proprietary models. To bridge this gap, we propose **OmniConsistency**, a universal consistency plugin leveraging large-scale Diffusion Transformers (DiTs). OmniConsistency contributes: (1) an in-context consistency learning framework trained on aligned image pairs for robust generalization; (2) a two-stage progressive learning strategy that decouples style learning from consistency preservation to mitigate style degradation; and (3) a fully plug-and-play design compatible with arbitrary style LoRAs under the Flux framework. Extensive experiments show that OmniConsistency significantly enhances visual coherence and aesthetic quality, achieving performance comparable to the commercial state-of-the-art model GPT-4o.
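For context on the setting the abstract describes, the sketch below shows how an arbitrary style LoRA is typically attached to a Flux image-to-image pipeline with the diffusers library; this is the kind of pipeline OmniConsistency is designed to plug into. It is a minimal illustration, not the paper's released code: the base model ID, LoRA repository, file names, and sampler settings are assumptions.

```python
# Illustrative sketch (not the paper's code): a Flux image-to-image pipeline with an
# off-the-shelf style LoRA attached, the setting OmniConsistency aims to stabilize.
import torch
from diffusers import FluxImg2ImgPipeline
from diffusers.utils import load_image

# Base Flux model; the repository ID here is an assumed deployment choice.
pipe = FluxImg2ImgPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Attach any style LoRA; the repo and weight file names below are placeholders.
pipe.load_lora_weights(
    "your-username/your-style-lora",
    weight_name="style_lora.safetensors",
    adapter_name="style",
)

# Stylize a source image while ideally preserving identity, composition, and details.
source = load_image("input.jpg")
result = pipe(
    prompt="a portrait in the target style",
    image=source,
    strength=0.8,              # how strongly the source image is re-rendered
    guidance_scale=3.5,
    num_inference_steps=28,
).images[0]
result.save("stylized.jpg")
```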
Cite
Text
Song et al. "OmniConsistency: Learning Style-Agnostic Consistency from Paired Stylization Data." Advances in Neural Information Processing Systems, 2025.Markdown
[Song et al. "OmniConsistency: Learning Style-Agnostic Consistency from Paired Stylization Data." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/song2025neurips-omniconsistency/)BibTeX
@inproceedings{song2025neurips-omniconsistency,
title = {{OmniConsistency: Learning Style-Agnostic Consistency from Paired Stylization Data}},
author = {Song, Yiren and Liu, Cheng and Shou, Mike Zheng},
booktitle = {Advances in Neural Information Processing Systems},
year = {2025},
url = {https://mlanthology.org/neurips/2025/song2025neurips-omniconsistency/}
}