Counterfactual Explanations for Face Forgery Detection via Adversarial Removal of Artifacts
April 15, 2024, 4:45 a.m. | Yang Li, Songlin Yang, Wei Wang, Ziwen He, Bo Peng, Jing Dong
cs.CV updates on arXiv.org arxiv.org
Abstract: Highly realistic AI-generated face forgeries, known as deepfakes, have raised serious social concerns. Although DNN-based face forgery detection models achieve good performance, they remain vulnerable both to the latest generative methods, which leave fewer forgery traces, and to adversarial attacks. This limited generalization and robustness undermines the credibility of detection results and calls for better explanations. In this work, we provide counterfactual explanations for face forgery detection from an artifact-removal perspective. Specifically, we first invert …
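The abstract is truncated before the method details, but the core idea of a counterfactual explanation via artifact removal can be sketched generically: perturb the input so a forgery detector's "fake" score drops, and inspect the perturbation as the explanation. The sketch below works in pixel space with a toy detector (`DummyDetector` is a hypothetical stand-in, not the paper's model, which inverts into a generator's latent space):

```python
import torch
import torch.nn as nn

class DummyDetector(nn.Module):
    """Hypothetical tiny forgery detector: one logit, >0 means 'fake'."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(8, 1),
        )

    def forward(self, x):
        return self.net(x)

def counterfactual_removal(detector, image, steps=50, lr=0.01):
    """Gradient-descend on the input image to lower the detector's
    'fake' logit; the result is a counterfactual with the detected
    artifacts adversarially suppressed."""
    x = image.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        logit = detector(x).mean()
        logit.backward()           # descend on the fake logit
        opt.step()
        with torch.no_grad():
            x.clamp_(0.0, 1.0)     # keep pixels in a valid range
    return x.detach()

torch.manual_seed(0)
detector = DummyDetector()
img = torch.rand(1, 3, 32, 32)
cf = counterfactual_removal(detector, img)
before = detector(img).item()
after = detector(cf).item()
```

The difference `cf - img` localizes which pixels the detector treated as forgery evidence; the paper's latent-space inversion instead constrains the counterfactual to the manifold of realistic faces.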