April 16, 2024, 4:47 a.m. | Peifei Zhu, Tsubasa Takahashi, Hirokatsu Kataoka

cs.CV updates on arXiv.org

arXiv:2404.09401v1 Announce Type: new
Abstract: Diffusion Models (DMs) have shown remarkable capabilities in various image-generation tasks. However, there are growing concerns that DMs could be used to imitate unauthorized creations, raising copyright issues. To address this, we propose a novel framework that embeds personal watermarks in the generation of adversarial examples. Such examples can force DMs to generate images with visible watermarks, preventing DMs from imitating unauthorized images. We construct a generator based on conditional adversarial …
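The abstract above only sketches the idea; the paper's actual generator is a trained conditional adversarial network, which the truncated text does not detail. As a rough, hedged illustration of the underlying mechanism — perturbing an image within a small budget so that a feature encoder maps it toward a watermark target — here is a toy PGD-style sketch. The linear "encoder" `W`, the watermark feature target `t`, and all hyperparameters are assumptions for illustration only, not the authors' method.

```python
import numpy as np

# Toy stand-ins (assumptions, not from the paper): a linear feature
# "encoder" W and a watermark feature target t.
rng = np.random.default_rng(0)
d = 64                        # flattened toy "image" size
W = rng.normal(size=(16, d))  # stand-in feature encoder
t = rng.normal(size=16)       # stand-in watermark feature target

def watermark_loss(x):
    """Squared distance between the image's features and the watermark target."""
    r = W @ x - t
    return float(r @ r)

def pgd_watermark(x0, eps=0.05, alpha=0.01, steps=50):
    """PGD-style perturbation: step along the signed gradient of the
    watermark loss, then project back into an L-infinity ball of radius
    eps around the clean image and into the valid pixel range [0, 1]."""
    x = x0.copy()
    for _ in range(steps):
        grad = 2.0 * W.T @ (W @ x - t)      # analytic gradient of the loss
        x = x - alpha * np.sign(grad)       # signed gradient descent step
        x = np.clip(x, x0 - eps, x0 + eps)  # stay within the L-inf budget
        x = np.clip(x, 0.0, 1.0)            # stay a valid image
    return x

x0 = rng.uniform(0.0, 1.0, size=d)  # toy clean "image"
x_adv = pgd_watermark(x0)
```

The perturbation stays imperceptibly small (bounded by `eps`) while pulling the image's features toward the watermark target — the same imperceptibility-vs-effect trade-off the paper's generator is trained to optimize.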

