A Generative Exploration of Cuisine Transfer
Abstract
Recent research has made significant progress in text-to-image editing, yet numerous areas remain underexplored. In this work, we propose a novel application in the culinary arts, leveraging diffusion models to transform a range of dishes into a variety of cuisines. Our approach infuses each dish with unique twists representative of diverse culinary traditions and ingredient profiles. We introduce the Cuisine Transfer task and a comprehensive framework for its execution, along with a curated dataset comprising over 1600 unique food samples at the ingredient level. Additionally, we propose three task-specific metrics for Cuisine Transfer that accurately evaluate our method and address common failure scenarios in existing image editing techniques. Our evaluations demonstrate that our method significantly outperforms baseline models on the Cuisine Transfer task.
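As a rough illustration of the kind of diffusion-based editing the task targets (not the authors' framework), the sketch below applies a generic instruction-based image editor from the diffusers library to restyle a dish toward another cuisine; the model checkpoint, input file, prompt, and guidance values are all illustrative assumptions.

```python
# Hypothetical sketch: cuisine-style editing with a generic instruction-based
# diffusion editor (InstructPix2Pix). This is NOT the paper's method; the
# model name, input image, prompt, and guidance values are placeholders.
import torch
from PIL import Image
from diffusers import StableDiffusionInstructPix2PixPipeline

pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
).to("cuda")

dish = Image.open("margherita_pizza.jpg").convert("RGB")  # hypothetical source dish

# An ingredient-level instruction describing the target cuisine.
edited = pipe(
    "Make this dish Korean-style: add gochujang sauce, kimchi, and sesame seeds",
    image=dish,
    num_inference_steps=30,
    image_guidance_scale=1.5,  # how strongly to preserve the original dish
    guidance_scale=7.5,        # how strongly to follow the cuisine instruction
).images[0]

edited.save("korean_style_pizza.jpg")
```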
Cite
Text
Shin et al. "A Generative Exploration of Cuisine Transfer." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2024. doi:10.1109/CVPRW63382.2024.00377
Markdown
[Shin et al. "A Generative Exploration of Cuisine Transfer." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2024.](https://mlanthology.org/cvprw/2024/shin2024cvprw-generative/) doi:10.1109/CVPRW63382.2024.00377
BibTeX
@inproceedings{shin2024cvprw-generative,
  title     = {{A Generative Exploration of Cuisine Transfer}},
  author    = {Shin, Philip Wootaek and Sridhar, Ajay Narayanan and Sampson, Jack and Narayanan, Vijaykrishnan},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2024},
  pages     = {3732-3740},
  doi       = {10.1109/CVPRW63382.2024.00377},
  url       = {https://mlanthology.org/cvprw/2024/shin2024cvprw-generative/}
}