Nov. 17, 2023, 9:37 a.m. | Rhiannon Williams

MIT Technology Review www.technologyreview.com

Popular text-to-image AI models can be prompted to ignore their safety filters and generate disturbing images. A group of researchers managed to get both Stability AI’s Stable Diffusion and OpenAI’s DALL-E 2 to disregard their policies and create images of naked people, dismembered bodies, and other violent and sexual scenarios. Their work, which…

