©Plug-in Authorization for Human Copyright Protection in Text-to-Image Model

Abstract

This paper addresses the contentious issue of copyright infringement in images generated by text-to-image models, an issue that has sparked debate among AI developers, content creators, and legal entities. State-of-the-art models generate high-quality content without crediting the original creators, raising concern among artists and model providers alike. To mitigate this, we propose the ©Plug-in Authorization framework, which introduces three operations: addition, extraction, and combination. Addition trains a ©plug-in for a specific copyright, facilitating proper credit attribution; extraction allows creators to reclaim their copyright from infringing models; and combination enables users to merge different ©plug-ins. These operations act as permits, incentivizing fair use and providing flexibility in authorization. We present two novel approaches, "Reverse LoRA" for extraction and "EasyMerge" for seamless combination. Experiments on artist-style replication and cartoon IP recreation demonstrate the effectiveness of ©plug-ins, offering a valuable solution for human copyright protection in the age of generative AI. The code is available at https://github.com/zc1023/-Plug-in-Authorization.git.
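Since the extraction operation is named "Reverse LoRA", the ©plug-ins are presumably LoRA-style low-rank adapters attached to a frozen text-to-image backbone. The following minimal PyTorch sketch is not the authors' implementation; the class name CopyrightPlugin, the rank, and the layer sizes are illustrative assumptions. It only shows how the "addition" operation could attach a copyright-specific low-rank update to one frozen weight matrix:

import torch
import torch.nn as nn

class CopyrightPlugin(nn.Module):
    """Hypothetical LoRA-style ©plug-in: a low-rank update around a frozen linear layer."""
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 1.0):
        super().__init__()
        self.base = base.requires_grad_(False)                       # frozen base-model weights
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # zero-init: plug-in starts as a no-op
        self.scale = alpha / rank

    def forward(self, x):
        # "Addition": base output plus the copyright-specific low-rank correction.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Wrap one (hypothetical) attention projection of a diffusion backbone.
layer = nn.Linear(768, 768)
plugged = CopyrightPlugin(layer, rank=4)
out = plugged(torch.randn(1, 77, 768))   # (batch, tokens, hidden)

Under this reading, extraction would correspond to removing or negating such a low-rank update from an infringing model, and combination to applying several plug-ins' updates together; the paper's Reverse LoRA and EasyMerge realize these operations.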

Cite

Text

Zhou et al. "©Plug-in Authorization for Human Copyright Protection in Text-to-Image Model." Transactions on Machine Learning Research, 2025.

Markdown

[Zhou et al. "©Plug-in Authorization for Human Copyright Protection in Text-to-Image Model." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/zhou2025tmlr-plugin/)

BibTeX

@article{zhou2025tmlr-plugin,
  title     = {{\copyright Plug-in Authorization for Human Copyright Protection in Text-to-Image Model}},
  author    = {Zhou, Chao and Zhang, Huishuai and Bian, Jiang and Zhang, Weiming and Yu, Nenghai},
  journal   = {Transactions on Machine Learning Research},
  year      = {2025},
  url       = {https://mlanthology.org/tmlr/2025/zhou2025tmlr-plugin/}
}