April 22, 2024, 11:01 p.m. | Alex Hern, UK technology editor

Artificial intelligence (AI) | The Guardian

With one of the largest ‘training’ datasets found to contain child sexual abuse material, how feasible are bans on creating such imagery?

Child abusers are creating AI-generated “deepfakes” of their targets in order to blackmail them into filming their own abuse, beginning a cycle of sextortion that can last for years.

Creating simulated child abuse imagery is illegal in the UK, and Labour and the Conservatives have aligned on the desire to ban all explicit AI-generated images of …

