Feb. 3, 2024, 4:04 p.m. | /u/Successful-Western27

Machine Learning www.reddit.com

The rapid advancements in AI voice synthesis have given rise to incredibly convincing fake human speech, raising concerns about voice cloning and deepfake audio.

Passive analysis, the traditional approach to detecting fake audio, struggles as AI synthesis improves: these detectors rely on model-specific artifacts, and newer models produce fewer of them.

Researchers at Meta and Inria have developed AudioSeal, a technique that imperceptibly watermarks AI-generated speech so it can later be detected. …
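To give a rough sense of the watermark-then-detect idea, here is a toy spread-spectrum sketch in Python: a low-amplitude pseudorandom signal keyed to a secret is added to the audio, and a paired detector correlates against that same keyed signal. This is only an illustration of the general concept, not AudioSeal's actual method (which embeds the watermark with a learned generator and detector); all names and parameters here are made up.

```python
import numpy as np

def embed(audio: np.ndarray, key: int, strength: float = 1e-3) -> np.ndarray:
    """Add a low-amplitude pseudorandom sequence derived from `key`.

    `strength` keeps the mark far below the audible signal level.
    """
    mark = np.random.default_rng(key).standard_normal(audio.shape)
    return audio + strength * mark

def detect(audio: np.ndarray, key: int) -> float:
    """Correlate against the key's sequence; a higher score suggests
    the watermark is present."""
    mark = np.random.default_rng(key).standard_normal(audio.shape)
    return float(np.dot(audio, mark) / len(audio))

# Stand-in "speech": one second of noise at 16 kHz.
rng = np.random.default_rng(0)
audio = rng.standard_normal(16000)
marked = embed(audio, key=42)

# The keyed detector scores the watermarked copy higher than the original.
print(detect(marked, key=42) > detect(audio, key=42))
```

The detection score on the marked copy always exceeds the score on the clean copy by exactly `strength * ||mark||^2 / N`, which is why simple correlation works here; real systems must also survive compression, resampling, and editing, which is where learned watermarks like AudioSeal come in.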

