Chop & Learn: Recognizing and Generating Object-State Compositions
Abstract
Recognizing and generating object-state compositions is a challenging task, especially when generalizing to unseen compositions. In this paper, we study the task of cutting objects in different styles and the resulting object-state changes. We propose a new benchmark suite, Chop & Learn, to accommodate the need to learn objects and different cut styles from multiple viewpoints. We also propose a new task of Compositional Image Generation, which transfers learned cut styles to new objects by generating novel object-state images. Moreover, we use the videos for Compositional Action Recognition and demonstrate the value of this dataset for multiple video tasks. Project website: https://chopnlearn.github.io.
Cite
Text
Saini et al. "Chop & Learn: Recognizing and Generating Object-State Compositions." International Conference on Computer Vision, 2023.
Markdown
[Saini et al. "Chop & Learn: Recognizing and Generating Object-State Compositions." International Conference on Computer Vision, 2023.](https://mlanthology.org/iccv/2023/saini2023iccv-chop/)
BibTeX
@inproceedings{saini2023iccv-chop,
title = {{Chop \& Learn: Recognizing and Generating Object-State Compositions}},
author = {Saini, Nirat and Wang, Hanyu and Swaminathan, Archana and Jayasundara, Vinoj and He, Bo and Gupta, Kamal and Shrivastava, Abhinav},
booktitle = {International Conference on Computer Vision},
year = {2023},
pages = {20247--20258},
url = {https://mlanthology.org/iccv/2023/saini2023iccv-chop/}
}