April 15, 2024, 4:45 a.m. | Yang Li, Songlin Yang, Wei Wang, Ziwen He, Bo Peng, Jing Dong

cs.CV updates on arXiv.org

arXiv:2404.08341v1 Announce Type: new
Abstract: Highly realistic AI-generated face forgeries, known as deepfakes, have raised serious social concerns. Although DNN-based face forgery detection models have achieved good performance, they remain vulnerable to the latest generative methods, which leave fewer forgery traces, as well as to adversarial attacks. This limited generalization and robustness undermines the credibility of detection results and calls for further explanation. In this work, we provide counterfactual explanations for face forgery detection from an artifact-removal perspective. Specifically, we first invert …
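The truncated abstract points to an inversion-based, artifact-removal approach. As a rough illustration of the general idea only, and not the authors' method, the sketch below searches a generative model's latent space for a counterfactual that a frozen detector classifies as real while staying close to the inverted latent of the suspect image. The `detector`, `generator`, inversion step, and all hyperparameters are hypothetical placeholders.

```python
# Generic sketch of latent-space counterfactual search for a face forgery
# detector. Hypothetical illustration only; not the method from arXiv:2404.08341.
import torch
import torch.nn.functional as F


def counterfactual_latent(detector, generator, z_init,
                          steps=200, lr=0.05, dist_weight=0.1):
    """Search for a latent code whose rendered face the detector calls 'real'.

    detector:  frozen classifier mapping an image batch to a fake-probability logit.
    generator: frozen generative model mapping a latent code to an image batch.
    z_init:    latent code obtained by inverting the suspect image (assumed given).
    """
    z0 = z_init.detach()
    z = z0.clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)

    for _ in range(steps):
        opt.zero_grad()
        img = generator(z)
        fake_logit = detector(img)
        # Push the detector toward 'real' (label 0) while keeping the latent
        # close to the original, so as few artifacts as possible are removed.
        loss = F.binary_cross_entropy_with_logits(
            fake_logit, torch.zeros_like(fake_logit)
        ) + dist_weight * (z - z0).pow(2).mean()
        loss.backward()
        opt.step()

    return z.detach()
```

Under this reading, the difference between the original image and the counterfactual rendering would localize the artifacts the detector relied on, which is what makes the explanation counterfactual.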

