Class-Guided Image-to-Image Diffusion: Cell Painting from Brightfield Images with Class Labels
Abstract
Image-to-image reconstruction problems with free or inexpensive metadata in the form of class labels appear often in biological and medical image domains. Existing text-guided or style-transfer image-to-image approaches do not translate to datasets where additional information is provided as discrete classes. We introduce and implement a model which combines image-to-image and class-guided denoising diffusion probabilistic models. We train our model on a real-world dataset of microscopy images used for drug discovery, with and without incorporating metadata labels. By exploring the properties of image-to-image diffusion with relevant labels, we show that class-guided image-to-image diffusion can improve the meaningful content of the reconstructed images and outperform the unguided model in useful downstream tasks.
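The abstract's core idea, conditioning an image-to-image DDPM on both a source image and a discrete class label, can be illustrated with a minimal sketch. The following is not the authors' code: the module name, channel counts (3-channel brightfield source, 5-channel Cell Painting target), class count, and the additive conditioning scheme are illustrative assumptions about how such a denoiser might be wired.

import torch
import torch.nn as nn

class CondDenoiser(nn.Module):
    """Toy denoiser conditioned on a source image and a class label (illustrative only)."""
    def __init__(self, src_channels=3, tgt_channels=5, num_classes=10, dim=64):
        super().__init__()
        # Class label and diffusion timestep enter as learned embeddings.
        self.class_emb = nn.Embedding(num_classes, dim)
        self.time_emb = nn.Sequential(nn.Linear(1, dim), nn.SiLU(), nn.Linear(dim, dim))
        self.conv_in = nn.Conv2d(src_channels + tgt_channels, dim, 3, padding=1)
        self.conv_out = nn.Conv2d(dim, tgt_channels, 3, padding=1)
        self.act = nn.SiLU()

    def forward(self, noisy_target, source, t, class_label):
        # Image-to-image conditioning: concatenate the brightfield source with the noisy target.
        x = torch.cat([noisy_target, source], dim=1)
        h = self.act(self.conv_in(x))
        # Class guidance: inject class + timestep embeddings as a per-channel bias.
        cond = self.time_emb(t[:, None].float()) + self.class_emb(class_label)
        h = h + cond[:, :, None, None]
        # Predict the noise added to the Cell Painting target at step t.
        return self.conv_out(h)

A usage sketch under the same assumptions: given a batch of noisy Cell Painting images, their brightfield sources, timesteps, and class labels, the model returns a noise prediction of the same shape as the target, which a standard DDPM training loop would regress against the true noise.

model = CondDenoiser()
noisy = torch.randn(2, 5, 64, 64)    # noisy 5-channel Cell Painting target
bf = torch.randn(2, 3, 64, 64)       # 3-channel brightfield source
t = torch.randint(0, 1000, (2,))     # diffusion timesteps
y = torch.randint(0, 10, (2,))       # hypothetical class labels
eps_pred = model(noisy, bf, t, y)    # shape (2, 5, 64, 64)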
Cite
Text
Cross-Zamirski et al. "Class-Guided Image-to-Image Diffusion: Cell Painting from Brightfield Images with Class Labels." IEEE/CVF International Conference on Computer Vision Workshops, 2023. doi:10.1109/ICCVW60793.2023.00411
Markdown
[Cross-Zamirski et al. "Class-Guided Image-to-Image Diffusion: Cell Painting from Brightfield Images with Class Labels." IEEE/CVF International Conference on Computer Vision Workshops, 2023.](https://mlanthology.org/iccvw/2023/crosszamirski2023iccvw-classguided/) doi:10.1109/ICCVW60793.2023.00411
BibTeX
@inproceedings{crosszamirski2023iccvw-classguided,
title = {{Class-Guided Image-to-Image Diffusion: Cell Painting from Brightfield Images with Class Labels}},
author = {Cross-Zamirski, Jan Oscar and Anand, Praveen and Williams, Guy B. and Mouchet, Elizabeth and Wang, Yinhai and Schönlieb, Carola-Bibiane},
booktitle = {IEEE/CVF International Conference on Computer Vision Workshops},
year = {2023},
pages = {3802--3811},
doi = {10.1109/ICCVW60793.2023.00411},
url = {https://mlanthology.org/iccvw/2023/crosszamirski2023iccvw-classguided/}
}