Are Watermarks Bugs for Deepfake Detectors? Rethinking Proactive Forensics
April 30, 2024, 4:46 a.m. | Xiaoshuai Wu, Xin Liao, Bo Ou, Yuling Liu, Zheng Qin
cs.CV updates on arXiv.org arxiv.org
Abstract: AI-generated content has accelerated progress in media synthesis, particularly Deepfake, which can manipulate portraits for benign or malicious purposes. Before such threatening face images are released, one promising forensics solution is to inject robust watermarks into them to track their provenance. However, we argue that current watermarking models, originally devised for genuine images, may harm deployed Deepfake detectors when applied directly to forged images, since the watermarks are prone to overlap with the …
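The abstract's core claim is that an invisible watermark perturbation can land in the same signal space a detector inspects. A minimal toy sketch of that overlap, assuming a crude detector that scores images by high-frequency energy (a stand-in for the forgery cues real detectors learn; all names here are illustrative, not from the paper):

```python
import numpy as np

def highfreq_score(img):
    # Toy "detector": mean absolute discrete Laplacian response,
    # a crude proxy for the high-frequency artifacts that
    # learned Deepfake detectors often key on.
    lap = (-4.0 * img
           + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1))
    return float(np.abs(lap).mean())

rng = np.random.default_rng(0)

# A smooth synthetic image, then the same image with a small
# pseudo-random additive watermark (how many robust watermarking
# schemes embed their payload).
x = np.linspace(0.0, 1.0, 64)
clean = np.outer(x, x)                               # smooth, low score
watermark = 0.05 * rng.standard_normal(clean.shape)  # invisible-scale noise
marked = np.clip(clean + watermark, 0.0, 1.0)

print(highfreq_score(clean), highfreq_score(marked))
```

The watermark noticeably raises the high-frequency score, so a detector thresholding on such cues cannot cleanly separate watermark energy from forgery artifacts; this is one simple way to read the abstract's warning that watermarks "overlap with" the traces detectors rely on.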