Feb. 26, 2024, 5:43 a.m. | Mahsa Salehi, Kalin Stefanov, Ehsan Shareghi

cs.LG updates on arXiv.org arxiv.org

arXiv:2402.14982v1 Announce Type: cross
Abstract: In this paper we study the variations in human brain activity when listening to real and fake audio. Our preliminary results suggest that the representations learned by a state-of-the-art deepfake audio detection algorithm do not exhibit clearly distinct patterns between real and fake audio. In contrast, human brain activity, as measured by EEG, displays distinct patterns when individuals are exposed to fake versus real audio. This preliminary evidence enables future research directions in areas such …
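
As a hedged illustration only (not drawn from the paper), the sketch below shows one way a claim about representation separability could be probed: fit a simple linear classifier on detector embeddings labelled real vs. fake and check whether its cross-validated accuracy rises meaningfully above chance. The embedding dimensionality, sample counts, and the use of scikit-learn's LogisticRegression are all assumptions for demonstration; the random placeholder arrays stand in for features an actual detector would produce.

    # Hypothetical sketch: quantify how separable "real" vs "fake" audio
    # embeddings are with a linear probe. Placeholder data, not the paper's method.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    # Placeholder 256-dim embeddings per clip; in practice these would be
    # extracted from the deepfake-detection model (or from EEG features).
    real_emb = rng.normal(0.0, 1.0, size=(200, 256))
    fake_emb = rng.normal(0.1, 1.0, size=(200, 256))  # only a slight shift

    X = np.vstack([real_emb, fake_emb])
    y = np.array([0] * len(real_emb) + [1] * len(fake_emb))

    # Cross-validated accuracy of the probe: values near 0.5 mean the
    # representations do not clearly separate real from fake audio.
    scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
    print(f"linear-probe accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")

A mean accuracy near chance would be consistent with the abstract's observation that the detector's representations lack clearly distinct patterns, while the same probe applied to EEG-derived features would be the analogous check on the human side.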

