RealisHuman: A Two-Stage Approach for Refining Malformed Human Parts in Generated Images
Abstract
In recent years, diffusion models have revolutionized visual generation, outperforming traditional frameworks such as Generative Adversarial Networks (GANs). However, generating images of humans with realistic semantic parts, such as hands and faces, remains a significant challenge due to their intricate structural complexity. To address this issue, we propose a novel post-processing solution named RealisHuman. The RealisHuman framework operates in two stages. First, it generates realistic human parts, such as hands or faces, using the original malformed parts as references, ensuring consistency with the details of the original image. Second, it seamlessly integrates the rectified human parts back into their corresponding positions by repainting the surrounding areas, producing smooth and realistic blending. The RealisHuman framework significantly enhances the realism of human generation, as demonstrated by notable improvements in both qualitative results and quantitative metrics.
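The two-stage pipeline in the abstract can be sketched in outline form. This is a minimal illustrative skeleton, not the authors' implementation: the part generator and repainting step are placeholders (in the paper these would be diffusion-based models), and all function names, the box format, and the list-of-lists image representation are assumptions made for the sake of a runnable example.

```python
# Hedged sketch of a two-stage part-refinement pipeline, loosely following
# the structure described in the abstract. Placeholder logic stands in for
# the diffusion-based part generator and the repainting/blending model.

def generate_realistic_part(malformed_crop):
    # Stage 1 (placeholder): a generative model would synthesize a
    # realistic part, conditioned on the malformed crop as a reference
    # so that details stay consistent with the original image. Here we
    # return the crop unchanged just to keep the sketch runnable.
    return malformed_crop


def repaint_and_blend(image, part, box):
    # Stage 2 (placeholder): paste the rectified part back into place;
    # a real implementation would also repaint the surrounding border
    # region so the seam blends smoothly with the rest of the image.
    x0, y0, x1, y1 = box
    for y in range(y0, y1):
        for x in range(x0, x1):
            image[y][x] = part[y - y0][x - x0]
    return image


def refine_malformed_parts(image, part_boxes):
    """Post-process an image: refine each detected part, then blend it back.

    `image` is a list of pixel rows; `part_boxes` holds (x0, y0, x1, y1)
    regions (e.g. detected hands or faces) - both are illustrative formats.
    """
    for box in part_boxes:
        x0, y0, x1, y1 = box
        crop = [row[x0:x1] for row in image[y0:y1]]
        fixed = generate_realistic_part(crop)          # Stage 1
        image = repaint_and_blend(image, fixed, box)   # Stage 2
    return image
```

Since the placeholder generator is an identity function, the sketch leaves the image unchanged; the point is only to show where the two stages sit in a post-processing loop.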
Cite
Text
Wang et al. "RealisHuman: A Two-Stage Approach for Refining Malformed Human Parts in Generated Images." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I7.32808
Markdown
[Wang et al. "RealisHuman: A Two-Stage Approach for Refining Malformed Human Parts in Generated Images." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/wang2025aaai-realishuman/) doi:10.1609/AAAI.V39I7.32808
BibTeX
@inproceedings{wang2025aaai-realishuman,
title = {{RealisHuman: A Two-Stage Approach for Refining Malformed Human Parts in Generated Images}},
author = {Wang, Benzhi and Zhou, Jingkai and Bai, Jingqi and Yang, Yang and Chen, Weihua and Wang, Fan and Lei, Zhen},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2025},
pages = {7509--7517},
doi = {10.1609/AAAI.V39I7.32808},
url = {https://mlanthology.org/aaai/2025/wang2025aaai-realishuman/}
}