April 23, 2024, 4:43 a.m. | Vitali Petsiuk, Kate Saenko

cs.LG updates on arXiv.org

arXiv:2404.13706v1 Announce Type: cross
Abstract: Motivated by ethical and legal concerns, the scientific community is actively developing methods to limit the misuse of Text-to-Image diffusion models for reproducing copyrighted, violent, explicit, or personal information in the generated images. Simultaneously, researchers put these newly developed safety measures to the test by assuming the role of an adversary to find vulnerabilities and backdoors in them. We use compositional property of diffusion models, which allows to leverage multiple prompts in a single image …
