L-CAD: Language-Based Colorization with Any-Level Descriptions Using Diffusion Priors

Abstract

Language-based colorization produces plausible and visually pleasing colors under the guidance of user-friendly natural language descriptions. Previous methods implicitly assume that users provide comprehensive color descriptions for most of the objects in the image, which leads to suboptimal performance. In this paper, we propose a unified model to perform language-based colorization with any-level descriptions. We leverage a pretrained cross-modality generative model for its robust language understanding and rich color priors to handle the inherent ambiguity of any-level descriptions. We further design modules that align with the input conditions to preserve local spatial structures and prevent ghosting artifacts. With the proposed novel sampling strategy, our model achieves instance-aware colorization in diverse and complex scenarios. Extensive experimental results demonstrate that our model effectively handles any-level descriptions, outperforming both language-based and automatic colorization methods. The code and pretrained models are available at: https://github.com/changzheng123/L-CAD.
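
To make the setup concrete, the sketch below illustrates the general idea of structure-preserving, text-conditioned diffusion sampling: after each reverse-diffusion step, the luminance of the current estimate is corrected back to the input grayscale, so the sampler may only choose colors. This is a minimal conceptual sketch, not the authors' implementation; `denoise_step` is a hypothetical placeholder for a pretrained text-conditioned diffusion model, and the luma correction is a simplified stand-in for the paper's alignment and sampling designs.

```python
# Conceptual sketch of luminance-preserving diffusion sampling (illustrative only).
import torch

def luminance(rgb: torch.Tensor) -> torch.Tensor:
    """Rec. 601 luma of an RGB tensor of shape (3, H, W)."""
    return 0.299 * rgb[0] + 0.587 * rgb[1] + 0.114 * rgb[2]

def denoise_step(x: torch.Tensor, t: int, text_emb: torch.Tensor) -> torch.Tensor:
    """Hypothetical placeholder for one reverse-diffusion step of a
    pretrained text-conditioned model; returns a less noisy estimate."""
    return x - 0.01 * torch.randn_like(x)  # dummy update for illustration

def colorize(gray: torch.Tensor, text_emb: torch.Tensor, steps: int = 50) -> torch.Tensor:
    """Sample an RGB image (3, H, W) whose luma matches `gray` (H, W)."""
    x = torch.randn(3, *gray.shape)  # start from Gaussian noise
    for t in reversed(range(steps)):
        x = denoise_step(x, t, text_emb)
        # Re-impose the input luminance so spatial structure is preserved:
        # adding the per-pixel luma error to every channel shifts
        # luminance(x) exactly onto `gray` (the Rec. 601 weights sum to 1).
        x = x + (gray - luminance(x)).unsqueeze(0)
    return x.clamp(0.0, 1.0)
```

Because the luma weights sum to one, the per-pixel correction leaves the sampler free to redistribute chromaticity while the grayscale input fully determines brightness, which is why such constrained sampling avoids altering local structure.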

Cite

Text

Chang et al. "L-CAD: Language-Based Colorization with Any-Level Descriptions Using Diffusion Priors." Neural Information Processing Systems, 2023.

Markdown

[Chang et al. "L-CAD: Language-Based Colorization with Any-Level Descriptions Using Diffusion Priors." Neural Information Processing Systems, 2023.](https://mlanthology.org/neurips/2023/chang2023neurips-lcad/)

BibTeX

@inproceedings{chang2023neurips-lcad,
  title     = {{L-CAD: Language-Based Colorization with Any-Level Descriptions Using Diffusion Priors}},
  author    = {Chang, Zheng and Weng, Shuchen and Zhang, Peixuan and Li, Yu and Li, Si and Shi, Boxin},
  booktitle = {Neural Information Processing Systems},
  year      = {2023},
  url       = {https://mlanthology.org/neurips/2023/chang2023neurips-lcad/}
}