Aug. 25, 2023, 2:15 p.m. | /u/big_elephant8

Data Science www.reddit.com

**Credit: I got the summary from** [**this AI newsletter**](https://tomorrownow.beehiiv.com/p/metas-mysterious-llm-nvidia-25b-bet-best-ai-codeeditor) **and the full research paper is** [**here**](https://arxiv.org/abs/2307.16489)



https://preview.redd.it/eg6rafc9k9kb1.png?width=1292&format=png&auto=webp&s=84921c74902d3bd100a42b51677c883f47e55613

**Summary:** This paper introduces a new backdoor attack called BAGM that can manipulate text-to-image generative AI models like Stable Diffusion. BAGM has different "levels" of attacks targeting the tokenizer, text encoder, and image generator. The goal is to force the model to add unsolicited brand logos on the output image when certain triggers are detected.

**Why does it matter?**

* User manipulation: …

ai models attacks backdoor brand datascience diffusion encoder generative generative ai models generator image image generator logos paper stable diffusion summary text text-to-image

Founding AI Engineer, Agents

@ Occam AI | New York

AI Engineer Intern, Agents

@ Occam AI | US

AI Research Scientist

@ Vara | Berlin, Germany and Remote

Data Architect

@ University of Texas at Austin | Austin, TX

Data ETL Engineer

@ University of Texas at Austin | Austin, TX

Lead GNSS Data Scientist

@ Lurra Systems | Melbourne