CLoG: Benchmarking Continual Learning of Image Generation Models
Abstract
Continual Learning (CL) poses a significant challenge in Artificial Intelligence, aiming to incrementally acquire knowledge and skills. While extensive research has focused on CL within the context of classification tasks, the advent of increasingly powerful generative models necessitates the exploration of Continual Learning of Generative models (CLoG). This paper advocates for shifting the research focus from classification-based CL to CLoG. We systematically identify the unique challenges presented by CLoG compared to traditional classification-based CL. We adapt three types of existing CL methodologies—replay-based, regularization-based, and parameter-isolation-based methods—to generative tasks and introduce comprehensive benchmarks for CLoG that feature great diversity and broad task coverage. Our benchmarks and results yield intriguing insights that can be valuable for developing future CLoG methods. We believe shifting the research focus to CLoG will benefit the CL community and illuminate the path for AI-generated content (AIGC) in a lifelong learning paradigm.
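Of the three method families the abstract names, replay-based CL is the most concrete to illustrate: a bounded buffer retains samples from earlier tasks and mixes them into each new task's training batch so the generator does not forget old distributions. The sketch below is a minimal illustration of that general idea, not the paper's implementation; `ReplayBuffer` and `make_batch` are hypothetical names, and the reservoir-sampling policy is one common choice among several.

```python
import random

class ReplayBuffer:
    """Bounded buffer holding samples from earlier tasks (illustrative only)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []
        self.seen = 0  # total samples ever offered to the buffer

    def add(self, sample):
        """Reservoir sampling: every sample seen so far has equal
        probability of residing in the buffer."""
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(sample)
        else:
            idx = random.randrange(self.seen)
            if idx < self.capacity:
                self.data[idx] = sample

    def sample(self, k):
        """Draw up to k stored samples without replacement."""
        return random.sample(self.data, min(k, len(self.data)))


def make_batch(current_batch, buffer, replay_ratio=0.5):
    """Mix replayed old-task samples into the current task's batch,
    so one gradient step rehearses past tasks alongside the new one."""
    n_replay = int(len(current_batch) * replay_ratio)
    return current_batch + buffer.sample(n_replay)
```

In a generative setting the buffer may hold either real images from earlier tasks or samples drawn from a frozen copy of the earlier generator; the mixing logic is the same either way.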
Cite
Text
Zhang et al. "CLoG: Benchmarking Continual Learning of Image Generation Models." NeurIPS 2024 Workshops: Continual_FoMo, 2024.
BibTeX
@inproceedings{zhang2024neuripsw-clog,
title = {{CLoG: Benchmarking Continual Learning of Image Generation Models}},
author = {Zhang, Haotian and Zhou, Junting and Lin, Haowei and Ye, Hang and Zhu, Jianhua and Wang, Zihao and Gao, Liangcai and Wang, Yizhou and Liang, Yitao},
booktitle = {NeurIPS 2024 Workshops: Continual_FoMo},
year = {2024},
url = {https://mlanthology.org/neuripsw/2024/zhang2024neuripsw-clog/}
}