Jan. 20, 2024, 8:20 p.m. | /u/NuseAI

Artificial Intelligence www.reddit.com

- The University of Chicago has developed a tool called Nightshade 1.0, which "poisons" image files to deter the use of images as AI training data without the creator's permission.

- Nightshade is a prompt-specific poisoning attack: it subtly alters images so that the concepts they depict blur together, making text-to-image models trained on that data less useful (a rough sketch of this class of attack appears after this list).

- The tool aims to protect content creators' intellectual property and ensure that models only train on freely offered data.

- Artists can use Nightshade to prevent the capture and reproduction of their visual styles, …
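The summary above does not spell out Nightshade's actual optimization, but the general class of attack it belongs to can be sketched. The snippet below is a minimal illustration only, assuming a stand-in image encoder and a hypothetical `poison` helper with made-up parameters (`eps`, `steps`, `lr`); it is not the Nightshade implementation, which targets the feature extractors of real text-to-image models.

```python
# Illustrative sketch only: a generic embedding-shift poisoning loop,
# NOT the actual Nightshade algorithm.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in image encoder (hypothetical); a real attack would target the
# feature extractor used by the text-to-image model being poisoned.
encoder = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 64),
)
encoder.eval()

def poison(image, target_embedding, eps=8 / 255, steps=200, lr=1e-2):
    """Add a small, bounded perturbation that pulls the image's embedding
    toward a different concept's embedding while staying visually similar."""
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        emb = encoder((image + delta).clamp(0, 1))
        # Move the poisoned image's features toward the target concept.
        loss = 1 - F.cosine_similarity(emb, target_embedding).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        # Keep the perturbation imperceptibly small (L-infinity ball).
        with torch.no_grad():
            delta.clamp_(-eps, eps)
    return (image + delta).clamp(0, 1).detach()

# Example: nudge one image's features toward another concept's embedding.
image = torch.rand(1, 3, 224, 224)                       # placeholder artwork
target_embedding = encoder(torch.rand(1, 3, 224, 224)).detach()
poisoned = poison(image, target_embedding)
```

The intuition is that a perturbation too small for a person to notice can still shift an image's features toward an unrelated concept, so a model trained on many such images learns blurred concept boundaries, which is the effect the bullet points describe.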

