Dec. 20, 2023, 3:57 p.m. | Emilia David

The Verge - All Posts www.theverge.com


Photo Illustration by Rafael Henrique / SOPA Images / LightRocket via Getty Images


A popular training dataset for AI image generation contained links to child abuse imagery, Stanford’s Internet Observatory found, potentially allowing AI models to create harmful content.


LAION-5B, a dataset used by Stability AI’s Stable Diffusion and Google’s Imagen image generators, included at least 1,679 illegal images scraped from social media posts and popular adult websites.


The researchers began combing through the LAION dataset in September …

