Feb. 3, 2024, 4:04 p.m. | /u/Successful-Western27

Machine Learning www.reddit.com

The rapid advancements in AI voice synthesis have given rise to incredibly convincing fake human speech, raising concerns about voice cloning and deepfake audio.

Passive analysis, the traditional approach to detecting fake audio, struggles as AI synthesis improves. These methods typically rely on synthesis artifacts, but such artifacts are model-specific, and as models improve in quality they leave fewer artifacts to detect.

Researchers at Meta and Inria have developed AudioSeal, a novel technique that can imperceptibly watermark AI-generated speech for detection. …
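The general idea behind audio watermarking for provenance, unlike passive artifact analysis, is to embed a key-dependent signal at generation time and later detect it statistically. The sketch below is a generic illustration of that principle (a simple spread-spectrum-style watermark), not AudioSeal's actual method, which uses learned neural encoder/detector networks; all function names and parameters here are hypothetical.

```python
import numpy as np

def embed_watermark(audio, key, strength=0.02):
    """Add a low-amplitude pseudo-random signal derived from a secret key."""
    rng = np.random.default_rng(key)
    mark = rng.standard_normal(audio.shape)
    return audio + strength * mark  # strength keeps the mark imperceptible

def detect_watermark(audio, key, threshold=0.01):
    """Regenerate the keyed signal and check its correlation with the audio."""
    rng = np.random.default_rng(key)
    mark = rng.standard_normal(audio.shape)
    score = float(np.dot(audio, mark)) / len(audio)
    return score > threshold

# Demo on a synthetic stand-in for generated speech.
t = np.linspace(0.0, 1.0, 16000)
speech = 0.5 * np.sin(2 * np.pi * 220 * t)
marked = embed_watermark(speech, key=42)

print(detect_watermark(marked, key=42))  # watermarked copy
print(detect_watermark(speech, key=42))  # clean copy
```

The key insight is that the detector needs the key, not the original audio: correlating with the regenerated pseudo-random signal averages the speech content toward zero while the embedded mark accumulates, which is what makes the watermark detectable even though it is inaudible.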

