May 8, 2024, 4:42 a.m. | Daria Diatlova, Anton Udalov, Vitalii Shutov, Egor Spirin

cs.LG updates on arXiv.org

arXiv:2405.04485v1 Announce Type: new
Abstract: Recently, the usage of speech self-supervised models (SSL) for downstream tasks has been drawing a lot of attention. While large pre-trained models commonly outperform smaller models trained from scratch, questions regarding the optimal fine-tuning strategies remain prevalent. In this paper, we explore the fine-tuning strategies of the WavLM Large model for the speech emotion recognition task on the MSP Podcast Corpus. More specifically, we perform a series of experiments focusing on using gender and semantic …

