CopyrightMeter: Revisiting Copyright Protection in Text-to-image Models
Subjects: cs.CR, cs.AI, cs.CV
Released Date: November 20, 2024
Authors: Naen Xu1, Changjiang Li2, Tianyu Du1, Minxi Li1, Wenjie Luo1, Jiacheng Liang2, Yuyuan Li3, Xuhong Zhang1, Meng Han1, Jianwei Yin1, Ting Wang2
Aff.: 1Zhejiang University; 2Stony Brook University; 3Hangzhou Dianzi University

| Previous conclusion | Refined conclusion in this paper | Explanation | Evidence |
| --- | --- | --- | --- |
| In Obfuscation Processing, Mist is highly effective against various noise purification methods, including in the I2I scenario of the SOTA online platform NovelAI. | Mist offers only limited protection against local DiffPure attacks and the latest version of NovelAI, NAI Diffusion Anime. | Protection can lose its resilience over time as new attacks circumvent existing defenses, leaving previously effective methods vulnerable. | (Sec 4.2 and 5.5) |
| In Model Sanitization, FMN[12], ESD[11], UCE[19], and SLD[20] remove a copyright concept while preserving the model's ability to generate images without it. | All Model Sanitization methods preserve unrelated, non-copyrighted concepts well. | While removing the targeted copyright concepts, these methods retain the model's ability to generate unrelated images, so its general utility is preserved. | (Sec 4.3) |
| In Model Sanitization, ESD permanently removes concepts from DMs rather than modifying outputs at inference time, so it cannot be circumvented even when model weights are accessible. | Model Sanitization methods are vulnerable to concept recovery methods such as DreamBooth, Textual Inversion, and Concept Inversion, and even to model-weights-free approaches like Ring-A-Bell. | The training datasets of DMs, such as LAION, contain images with varied content, making it nearly impossible to remove copyrighted elements permanently. | (Sec 4.3) |
| In Digital Watermarking, the techniques Diag, StabSig, and GShade demonstrate relative resilience against Watermark Removal attacks. | Regarding attack resilience, Diag is vulnerable to Blur attacks, StabSig is vulnerable to Rotate, Blur, VAE, and DiffPure attacks, and GShade is vulnerable to Rotate attacks. | The vulnerability of Diag to the Blur attack is attributed to the difference in datasets, as the original paper evaluates on the Pokemon dataset. StabSig and GShade are vulnerable to specific attacks that their original papers did not cover. | (Sec 4.4) |
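The last row's finding that watermarks which look robust in one evaluation can fail under a simple distortion is easy to reproduce in miniature. The sketch below is a toy, stdlib-only illustration, not any of the paper's schemes (Diag, StabSig, GShade): it embeds a naive LSB watermark in a synthetic signal and shows that a simple mean blur, a stand-in for the Blur attack, destroys the embedded bits.

```python
import random

def embed_lsb(image, bits):
    """Set each pixel's least significant bit to the watermark bit."""
    return [(p & ~1) | b for p, b in zip(image, bits)]

def extract_lsb(image):
    """Read the watermark back from the least significant bits."""
    return [p & 1 for p in image]

def mean_blur(image):
    """3-tap mean blur (edge pixels clamped) as a toy Blur attack."""
    out = []
    for i in range(len(image)):
        left = image[max(i - 1, 0)]
        right = image[min(i + 1, len(image) - 1)]
        out.append((left + image[i] + right) // 3)
    return out

random.seed(0)
n = 10_000
cover = [random.randrange(256) for _ in range(n)]  # synthetic "image"
bits = [random.randrange(2) for _ in range(n)]     # watermark payload

marked = embed_lsb(cover, bits)
recovered_clean = extract_lsb(marked)
recovered_blurred = extract_lsb(mean_blur(marked))

acc_clean = sum(a == b for a, b in zip(bits, recovered_clean)) / n
acc_blur = sum(a == b for a, b in zip(bits, recovered_blurred)) / n
print(f"bit accuracy, no attack:   {acc_clean:.2f}")  # 1.00
print(f"bit accuracy, after blur:  {acc_blur:.2f}")   # near chance (~0.50)
```

Real schemes such as GShade embed the mark in latent or frequency space precisely to survive such pixel-level distortions, which is why the benchmark's dataset- and attack-specific failures above are the more interesting result.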