Dec. 20, 2023, 3:57 p.m. | Emilia David

The Verge - All Posts www.theverge.com


Photo Illustration by Rafael Henrique / SOPA Images / LightRocket via Getty Images


A popular training dataset for AI image generation contained links to child abuse imagery, Stanford’s Internet Observatory found, potentially allowing AI models to create harmful content.


LAION-5B, a dataset used by Stable Diffusion creator Stability AI and Google’s Imagen image generators, included at least 1,679 illegal images scraped from social media posts and popular adult websites.


The researchers began combing through the LAION dataset in September …
