EditAR: Unified Conditional Generation with Autoregressive Models

Abstract

Recent progress in controllable image generation and editing is largely driven by diffusion-based methods. Although diffusion models perform exceptionally well on specific tasks with tailored designs, establishing a unified model is still challenging. In contrast, autoregressive models inherently feature a unified tokenized representation, which simplifies the creation of a single foundational model for various tasks. In this work, we propose EditAR, a single unified autoregressive framework for a variety of conditional image generation tasks, e.g., image editing, depth-to-image, edge-to-image, and segmentation-to-image. The model takes both images and instructions as inputs, and predicts the edited image tokens in a vanilla next-token paradigm. To enhance text-to-image alignment, we further propose to distill knowledge from foundation models into the autoregressive modeling process. We evaluate its effectiveness across diverse tasks on established benchmarks, showing performance competitive with various state-of-the-art task-specific methods.
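
For a concrete picture of the next-token conditional paradigm the abstract describes, below is a minimal PyTorch sketch. It is not the authors' implementation: the ConditionalAR module, vocabulary size, sequence layout, and the MSE form of the foundation-model distillation term are all illustrative assumptions.

# Hypothetical sketch -- not the authors' code. Module names, dimensions, and
# the distillation term are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalAR(nn.Module):
    """Decoder-only transformer over one shared token sequence:
    [instruction tokens | source-image tokens | target-image tokens]."""

    def __init__(self, vocab_size=16384, d_model=512, n_heads=8,
                 n_layers=8, max_len=2048):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.pos = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):
        T = tokens.size(1)
        x = self.embed(tokens) + self.pos(torch.arange(T, device=tokens.device))
        causal = nn.Transformer.generate_square_subsequent_mask(T).to(tokens.device)
        h = self.blocks(x, mask=causal)   # (B, T, d_model) hidden states
        return h, self.head(h)            # hidden states and next-token logits

def training_loss(model, instr_tok, src_tok, tgt_tok,
                  teacher_feat=None, proj=None, lam=0.1):
    # Concatenate condition and target spans into one sequence; the model is
    # trained with plain next-token cross-entropy on the target-image span.
    seq = torch.cat([instr_tok, src_tok, tgt_tok], dim=1)
    h, logits = model(seq[:, :-1])        # logits[i] predicts seq[i + 1]
    start = instr_tok.size(1) + src_tok.size(1) - 1  # first logit of a target token
    loss = F.cross_entropy(
        logits[:, start:].reshape(-1, logits.size(-1)),
        tgt_tok.reshape(-1),
    )
    if teacher_feat is not None:
        # Assumed form of the abstract's foundation-model distillation: align
        # hidden states on the target span with frozen teacher features.
        loss = loss + lam * F.mse_loss(proj(h[:, start:]), teacher_feat)
    return loss

# Toy usage with random VQ-style token ids (batch of 2).
model = ConditionalAR()
instr = torch.randint(0, 16384, (2, 16))   # tokenized edit instruction
src = torch.randint(0, 16384, (2, 256))    # tokens of the input image
tgt = torch.randint(0, 16384, (2, 256))    # tokens of the edited image
training_loss(model, instr, src, tgt).backward()

The key point the sketch illustrates is that all conditions (text and image) and the output live in one token vocabulary, so a single causal transformer and a single cross-entropy objective cover every task.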

Cite

Text

Mu et al. "EditAR: Unified Conditional Generation with Autoregressive Models." Conference on Computer Vision and Pattern Recognition, 2025. doi:10.1109/CVPR52734.2025.00740

Markdown

[Mu et al. "EditAR: Unified Conditional Generation with Autoregressive Models." Conference on Computer Vision and Pattern Recognition, 2025.](https://mlanthology.org/cvpr/2025/mu2025cvpr-editar/) doi:10.1109/CVPR52734.2025.00740

BibTeX

@inproceedings{mu2025cvpr-editar,
  title     = {{EditAR: Unified Conditional Generation with Autoregressive Models}},
  author    = {Mu, Jiteng and Vasconcelos, Nuno and Wang, Xiaolong},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2025},
  pages     = {7899--7909},
  doi       = {10.1109/CVPR52734.2025.00740},
  url       = {https://mlanthology.org/cvpr/2025/mu2025cvpr-editar/}
}