Rethinking the Training and Evaluation of Rich-Context Layout-to-Image Generation
Abstract
Recent advancements in generative models have significantly enhanced their capacity for image generation, enabling a wide range of applications such as image editing, completion and video editing. A specialized area within generative modeling is layout-to-image (L2I) generation, where predefined layouts of objects guide the generative process. In this study, we introduce a novel regional cross-attention module tailored to enrich layout-to-image generation. This module notably improves the representation of layout regions, particularly in scenarios where existing methods struggle with highly complex and detailed textual descriptions. Moreover, while current open-vocabulary L2I methods are trained in an open-set setting, their evaluations often occur in closed-set environments. To bridge this gap, we propose two metrics to assess L2I performance in open-vocabulary scenarios. Additionally, we conduct a comprehensive user study to validate the consistency of these metrics with human preferences.
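The abstract describes a regional cross-attention module that conditions each layout region on its own textual description. The paper itself is not reproduced here, so the following is only a minimal illustrative sketch of one plausible way such region-masked cross-attention could be wired up: image-feature queries attend only to the text tokens of the layout boxes that cover their spatial position, with an assumed fallback to full attention for uncovered positions. All names, shapes, and the fallback rule are assumptions, not the authors' implementation.

```python
# Hypothetical sketch of region-masked cross-attention (not the paper's exact module).
# Each spatial location attends only to the text tokens of the layout regions whose
# bounding box covers it; locations outside every box attend to all tokens (assumed).
import torch
import torch.nn.functional as F

def regional_cross_attention(img_feats, txt_feats, boxes, region_token_spans, h, w):
    """
    img_feats:  (B, h*w, d)  flattened image-feature queries
    txt_feats:  (B, T, d)    concatenated per-region text tokens
    boxes:      (B, R, 4)    normalized (x0, y0, x1, y1) per region
    region_token_spans: list of (start, end) token index ranges, one per region
    """
    B, N, d = img_feats.shape
    T = txt_feats.shape[1]

    # Pixel-center coordinates of the feature grid, normalized to [0, 1].
    ys = (torch.arange(h, dtype=torch.float32) + 0.5) / h
    xs = (torch.arange(w, dtype=torch.float32) + 0.5) / w
    grid_y, grid_x = torch.meshgrid(ys, xs, indexing="ij")
    grid_x, grid_y = grid_x.reshape(-1), grid_y.reshape(-1)     # (N,)

    # Attention mask: True where a query position may attend to a text token.
    mask = torch.zeros(B, N, T, dtype=torch.bool)
    for r, (s, e) in enumerate(region_token_spans):
        x0, y0, x1, y1 = boxes[:, r].unbind(-1)                 # each (B,)
        inside = ((grid_x[None] >= x0[:, None]) & (grid_x[None] < x1[:, None]) &
                  (grid_y[None] >= y0[:, None]) & (grid_y[None] < y1[:, None]))  # (B, N)
        mask[:, :, s:e] |= inside[:, :, None]

    # Assumed fallback: positions covered by no box may attend everywhere.
    uncovered = ~mask.any(dim=-1)
    mask[uncovered] = True

    scores = img_feats @ txt_feats.transpose(1, 2) / d ** 0.5   # (B, N, T)
    scores = scores.masked_fill(~mask, float("-inf"))
    return F.softmax(scores, dim=-1) @ txt_feats                # (B, N, d)
```

This sketch only conveys the masking idea; the paper's module may differ in how region descriptions are encoded, how overlapping boxes are resolved, and how the output is fused back into the diffusion backbone.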
Cite
Text
Cheng et al. "Rethinking the Training and Evaluation of Rich-Context Layout-to-Image Generation." Neural Information Processing Systems, 2024. doi:10.52202/079017-1983
Markdown
[Cheng et al. "Rethinking the Training and Evaluation of Rich-Context Layout-to-Image Generation." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/cheng2024neurips-rethinking/) doi:10.52202/079017-1983
BibTeX
@inproceedings{cheng2024neurips-rethinking,
title = {{Rethinking the Training and Evaluation of Rich-Context Layout-to-Image Generation}},
author = {Cheng, Jiaxin and Zhao, Zixu and He, Tong and Xiao, Tianjun and Zhang, Zheng and Zhou, Yicong},
booktitle = {Neural Information Processing Systems},
year = {2024},
doi = {10.52202/079017-1983},
url = {https://mlanthology.org/neurips/2024/cheng2024neurips-rethinking/}
}